00:00:00.001 Started by upstream project "autotest-per-patch" build number 132125
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.093 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.152 Fetching changes from the remote Git repository
00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.221 Using shallow fetch with depth 1
00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.221 > git --version # timeout=10
00:00:00.271 > git --version # 'git version 2.39.2'
00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.366 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.382 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.398 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.398 > git config core.sparsecheckout # timeout=10
00:00:05.412 > git read-tree -mu HEAD # timeout=10
00:00:05.430 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.458 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.459 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.624 [Pipeline] Start of Pipeline
00:00:05.639 [Pipeline] library
00:00:05.641 Loading library shm_lib@master
00:00:05.641 Library shm_lib@master is cached. Copying from home.
00:00:05.655 [Pipeline] node
00:00:20.657 Still waiting to schedule task
00:00:20.657 Waiting for next available executor on ‘vagrant-vm-host’
00:03:35.098 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:03:35.100 [Pipeline] {
00:03:35.111 [Pipeline] catchError
00:03:35.112 [Pipeline] {
00:03:35.129 [Pipeline] wrap
00:03:35.139 [Pipeline] {
00:03:35.149 [Pipeline] stage
00:03:35.151 [Pipeline] { (Prologue)
00:03:35.171 [Pipeline] echo
00:03:35.172 Node: VM-host-WFP1
00:03:35.177 [Pipeline] cleanWs
00:03:35.185 [WS-CLEANUP] Deleting project workspace...
00:03:35.185 [WS-CLEANUP] Deferred wipeout is used...
00:03:35.191 [WS-CLEANUP] done
00:03:35.380 [Pipeline] setCustomBuildProperty
00:03:35.472 [Pipeline] httpRequest
00:03:35.876 [Pipeline] echo
00:03:35.878 Sorcerer 10.211.164.101 is alive
00:03:35.889 [Pipeline] retry
00:03:35.891 [Pipeline] {
00:03:35.905 [Pipeline] httpRequest
00:03:35.909 HttpMethod: GET
00:03:35.909 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:03:35.910 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:03:35.911 Response Code: HTTP/1.1 200 OK
00:03:35.912 Success: Status code 200 is in the accepted range: 200,404
00:03:35.912 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:03:36.057 [Pipeline] }
00:03:36.074 [Pipeline] // retry
00:03:36.081 [Pipeline] sh
00:03:36.356 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:03:36.371 [Pipeline] httpRequest
00:03:36.768 [Pipeline] echo
00:03:36.770 Sorcerer 10.211.164.101 is alive
00:03:36.782 [Pipeline] retry
00:03:36.784 [Pipeline] {
00:03:36.799 [Pipeline] httpRequest
00:03:36.804 HttpMethod: GET
00:03:36.805 URL: http://10.211.164.101/packages/spdk_079966333d53d26454a940d2f5a6c948e2212442.tar.gz
00:03:36.805 Sending request to url: http://10.211.164.101/packages/spdk_079966333d53d26454a940d2f5a6c948e2212442.tar.gz
00:03:36.806 Response Code: HTTP/1.1 200 OK
00:03:36.807 Success: Status code 200 is in the accepted range: 200,404
00:03:36.807 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_079966333d53d26454a940d2f5a6c948e2212442.tar.gz
00:03:39.080 [Pipeline] }
00:03:39.090 [Pipeline] // retry
00:03:39.094 [Pipeline] sh
00:03:39.366 + tar --no-same-owner -xf spdk_079966333d53d26454a940d2f5a6c948e2212442.tar.gz
00:03:41.912 [Pipeline] sh
00:03:42.194 + git -C spdk log --oneline -n5
00:03:42.194 079966333 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:03:42.194 159fecd99 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:03:42.194 6a3a0b5fb bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:03:42.194 32c6c4b3a bdev: Rename _bdev_memory_domain_io_get_buf() by bdev_io_get_bounce_buf()
00:03:42.195 1e85affe1 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:03:42.216 [Pipeline] writeFile
00:03:42.230 [Pipeline] sh
00:03:42.561 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:42.574 [Pipeline] sh
00:03:42.855 + cat autorun-spdk.conf
00:03:42.855 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:42.855 SPDK_TEST_NVMF=1
00:03:42.855 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:42.855 SPDK_TEST_USDT=1
00:03:42.855 SPDK_TEST_NVMF_MDNS=1
00:03:42.855 SPDK_RUN_UBSAN=1
00:03:42.855 NET_TYPE=virt
00:03:42.855 SPDK_JSONRPC_GO_CLIENT=1
00:03:42.855 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:42.862 RUN_NIGHTLY=0
00:03:42.865 [Pipeline] }
00:03:42.879 [Pipeline] // stage
00:03:42.897 [Pipeline] stage
00:03:42.899 [Pipeline] { (Run VM)
00:03:42.912 [Pipeline] sh
00:03:43.196 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:43.196 + echo 'Start stage prepare_nvme.sh'
00:03:43.196 Start stage prepare_nvme.sh
00:03:43.196 + [[ -n 5 ]]
00:03:43.196 + disk_prefix=ex5
00:03:43.196 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:03:43.196 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:03:43.196 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:03:43.196 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:43.196 ++ SPDK_TEST_NVMF=1
00:03:43.196 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:43.196 ++ SPDK_TEST_USDT=1
00:03:43.196 ++ SPDK_TEST_NVMF_MDNS=1
00:03:43.196 ++ SPDK_RUN_UBSAN=1
00:03:43.196 ++ NET_TYPE=virt
00:03:43.196 ++ SPDK_JSONRPC_GO_CLIENT=1
00:03:43.196 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:43.196 ++ RUN_NIGHTLY=0
00:03:43.196 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:03:43.196 + nvme_files=()
00:03:43.196 + declare -A nvme_files
00:03:43.196 + backend_dir=/var/lib/libvirt/images/backends
00:03:43.196 + nvme_files['nvme.img']=5G
00:03:43.196 + nvme_files['nvme-cmb.img']=5G
00:03:43.196 + nvme_files['nvme-multi0.img']=4G
00:03:43.196 + nvme_files['nvme-multi1.img']=4G
00:03:43.196 + nvme_files['nvme-multi2.img']=4G
00:03:43.196 + nvme_files['nvme-openstack.img']=8G
00:03:43.196 + nvme_files['nvme-zns.img']=5G
00:03:43.196 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:43.196 + (( SPDK_TEST_FTL == 1 ))
00:03:43.196 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:43.196 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:03:43.196 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:03:43.196 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:03:43.196 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:03:43.196 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:03:43.196 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:43.196 + for nvme in "${!nvme_files[@]}"
00:03:43.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:03:43.455 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:43.455 + for nvme in "${!nvme_files[@]}"
00:03:43.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:03:43.455 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:43.455 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:03:43.455 + echo 'End stage prepare_nvme.sh'
00:03:43.455 End stage prepare_nvme.sh
00:03:43.467 [Pipeline] sh
00:03:43.748 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:43.748 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:03:43.748
00:03:43.748 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:03:43.748 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:03:43.748 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:03:43.748 HELP=0
00:03:43.748 DRY_RUN=0
00:03:43.748 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:03:43.748 NVME_DISKS_TYPE=nvme,nvme,
00:03:43.748 NVME_AUTO_CREATE=0
00:03:43.748 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:03:43.748 NVME_CMB=,,
00:03:43.748 NVME_PMR=,,
00:03:43.748 NVME_ZNS=,,
00:03:43.748 NVME_MS=,,
00:03:43.748 NVME_FDP=,,
00:03:43.748 SPDK_VAGRANT_DISTRO=fedora39
00:03:43.748 SPDK_VAGRANT_VMCPU=10
00:03:43.748 SPDK_VAGRANT_VMRAM=12288
00:03:43.748 SPDK_VAGRANT_PROVIDER=libvirt
00:03:43.748 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:43.748 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:43.748 SPDK_OPENSTACK_NETWORK=0
00:03:43.748 VAGRANT_PACKAGE_BOX=0
00:03:43.748 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:43.748 FORCE_DISTRO=true
00:03:43.748 VAGRANT_BOX_VERSION=
00:03:43.748 EXTRA_VAGRANTFILES=
00:03:43.748 NIC_MODEL=e1000
00:03:43.748
00:03:43.748 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:03:43.748 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:03:46.320 Bringing machine 'default' up with 'libvirt' provider...
00:03:47.697 ==> default: Creating image (snapshot of base box volume).
00:03:47.697 ==> default: Creating domain with the following settings...
00:03:47.697 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730900007_748bd2eeac3f03dc19da
00:03:47.697 ==> default: -- Domain type: kvm
00:03:47.697 ==> default: -- Cpus: 10
00:03:47.697 ==> default: -- Feature: acpi
00:03:47.697 ==> default: -- Feature: apic
00:03:47.697 ==> default: -- Feature: pae
00:03:47.697 ==> default: -- Memory: 12288M
00:03:47.697 ==> default: -- Memory Backing: hugepages:
00:03:47.697 ==> default: -- Management MAC:
00:03:47.697 ==> default: -- Loader:
00:03:47.697 ==> default: -- Nvram:
00:03:47.697 ==> default: -- Base box: spdk/fedora39
00:03:47.697 ==> default: -- Storage pool: default
00:03:47.697 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730900007_748bd2eeac3f03dc19da.img (20G)
00:03:47.697 ==> default: -- Volume Cache: default
00:03:47.697 ==> default: -- Kernel:
00:03:47.697 ==> default: -- Initrd:
00:03:47.697 ==> default: -- Graphics Type: vnc
00:03:47.697 ==> default: -- Graphics Port: -1
00:03:47.697 ==> default: -- Graphics IP: 127.0.0.1
00:03:47.697 ==> default: -- Graphics Password: Not defined
00:03:47.697 ==> default: -- Video Type: cirrus
00:03:47.697 ==> default: -- Video VRAM: 9216
00:03:47.697 ==> default: -- Sound Type:
00:03:47.697 ==> default: -- Keymap: en-us
00:03:47.697 ==> default: -- TPM Path:
00:03:47.697 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:47.697 ==> default: -- Command line args:
00:03:47.697 ==> default: -> value=-device,
00:03:47.697 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:47.697 ==> default: -> value=-drive,
00:03:47.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:03:47.697 ==> default: -> value=-device,
00:03:47.698 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:47.698 ==> default: -> value=-device,
00:03:47.698 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:47.698 ==> default: -> value=-drive,
00:03:47.698 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:47.698 ==> default: -> value=-device,
00:03:47.698 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:47.698 ==> default: -> value=-drive,
00:03:47.698 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:47.698 ==> default: -> value=-device,
00:03:47.698 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:47.698 ==> default: -> value=-drive,
00:03:47.698 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:47.698 ==> default: -> value=-device,
00:03:47.698 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:48.265 ==> default: Creating shared folders metadata...
00:03:48.265 ==> default: Starting domain.
00:03:50.840 ==> default: Waiting for domain to get an IP address...
00:04:08.934 ==> default: Waiting for SSH to become available...
00:04:08.934 ==> default: Configuring and enabling network interfaces...
00:04:14.233 default: SSH address: 192.168.121.115:22
00:04:14.233 default: SSH username: vagrant
00:04:14.233 default: SSH auth method: private key
00:04:16.134 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:24.278 ==> default: Mounting SSHFS shared folder...
00:04:26.207 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:26.207 ==> default: Checking Mount..
00:04:28.131 ==> default: Folder Successfully Mounted!
00:04:28.131 ==> default: Running provisioner: file...
00:04:29.068 default: ~/.gitconfig => .gitconfig
00:04:29.658
00:04:29.658 SUCCESS!
00:04:29.658
00:04:29.658 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:29.658 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:29.658 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:29.658
00:04:29.666 [Pipeline] }
00:04:29.681 [Pipeline] // stage
00:04:29.690 [Pipeline] dir
00:04:29.691 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:04:29.692 [Pipeline] {
00:04:29.705 [Pipeline] catchError
00:04:29.708 [Pipeline] {
00:04:29.724 [Pipeline] sh
00:04:30.006 + vagrant ssh-config --host vagrant
00:04:30.006 + sed -ne /^Host/,$p
00:04:30.006 + tee ssh_conf
00:04:33.327 Host vagrant
00:04:33.327 HostName 192.168.121.115
00:04:33.327 User vagrant
00:04:33.327 Port 22
00:04:33.327 UserKnownHostsFile /dev/null
00:04:33.327 StrictHostKeyChecking no
00:04:33.327 PasswordAuthentication no
00:04:33.327 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:33.327 IdentitiesOnly yes
00:04:33.327 LogLevel FATAL
00:04:33.327 ForwardAgent yes
00:04:33.327 ForwardX11 yes
00:04:33.327
00:04:33.342 [Pipeline] withEnv
00:04:33.345 [Pipeline] {
00:04:33.359 [Pipeline] sh
00:04:33.640 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:33.640 source /etc/os-release
00:04:33.640 [[ -e /image.version ]] && img=$(< /image.version)
00:04:33.640 # Minimal, systemd-like check.
00:04:33.640 if [[ -e /.dockerenv ]]; then
00:04:33.640 # Clear garbage from the node's name:
00:04:33.640 # agt-er_autotest_547-896 -> autotest_547-896
00:04:33.640 # $HOSTNAME is the actual container id
00:04:33.640 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:33.640 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:33.640 # We can assume this is a mount from a host where container is running,
00:04:33.640 # so fetch its hostname to easily identify the target swarm worker.
00:04:33.640 container="$(< /etc/hostname) ($agent)"
00:04:33.640 else
00:04:33.640 # Fallback
00:04:33.640 container=$agent
00:04:33.640 fi
00:04:33.640 fi
00:04:33.640 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:33.640
00:04:33.911 [Pipeline] }
00:04:33.927 [Pipeline] // withEnv
00:04:33.937 [Pipeline] setCustomBuildProperty
00:04:33.952 [Pipeline] stage
00:04:33.953 [Pipeline] { (Tests)
00:04:33.969 [Pipeline] sh
00:04:34.252 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:34.545 [Pipeline] sh
00:04:34.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:35.101 [Pipeline] timeout
00:04:35.101 Timeout set to expire in 1 hr 0 min
00:04:35.103 [Pipeline] {
00:04:35.118 [Pipeline] sh
00:04:35.453 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:36.053 HEAD is now at 079966333 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:04:36.066 [Pipeline] sh
00:04:36.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:36.622 [Pipeline] sh
00:04:36.905 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:37.181 [Pipeline] sh
00:04:37.482 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:04:37.748 ++ readlink -f spdk_repo
00:04:37.748 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:37.748 + [[ -n /home/vagrant/spdk_repo ]]
00:04:37.748 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:37.748 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:37.748 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:37.748 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:37.748 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:37.748 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:04:37.748 + cd /home/vagrant/spdk_repo
00:04:37.748 + source /etc/os-release
00:04:37.748 ++ NAME='Fedora Linux'
00:04:37.748 ++ VERSION='39 (Cloud Edition)'
00:04:37.748 ++ ID=fedora
00:04:37.748 ++ VERSION_ID=39
00:04:37.748 ++ VERSION_CODENAME=
00:04:37.748 ++ PLATFORM_ID=platform:f39
00:04:37.748 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:37.748 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:37.748 ++ LOGO=fedora-logo-icon
00:04:37.748 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:37.748 ++ HOME_URL=https://fedoraproject.org/
00:04:37.748 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:37.748 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:37.748 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:37.748 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:37.748 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:37.748 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:37.748 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:37.748 ++ SUPPORT_END=2024-11-12
00:04:37.748 ++ VARIANT='Cloud Edition'
00:04:37.748 ++ VARIANT_ID=cloud
00:04:37.748 + uname -a
00:04:37.748 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:37.748 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:38.316 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:38.316 Hugepages
00:04:38.316 node hugesize free / total
00:04:38.316 node0 1048576kB 0 / 0
00:04:38.316 node0 2048kB 0 / 0
00:04:38.316
00:04:38.316 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:38.316 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:38.316 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:38.316 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:38.316 + rm -f /tmp/spdk-ld-path
00:04:38.316 + source autorun-spdk.conf
00:04:38.316 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:38.316 ++ SPDK_TEST_NVMF=1
00:04:38.316 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:38.316 ++ SPDK_TEST_USDT=1
00:04:38.316 ++ SPDK_TEST_NVMF_MDNS=1
00:04:38.316 ++ SPDK_RUN_UBSAN=1
00:04:38.316 ++ NET_TYPE=virt
00:04:38.316 ++ SPDK_JSONRPC_GO_CLIENT=1
00:04:38.316 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:38.316 ++ RUN_NIGHTLY=0
00:04:38.316 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:38.316 + [[ -n '' ]]
00:04:38.316 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:38.575 + for M in /var/spdk/build-*-manifest.txt
00:04:38.575 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:38.575 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:38.575 + for M in /var/spdk/build-*-manifest.txt
00:04:38.575 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:38.575 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:38.575 + for M in /var/spdk/build-*-manifest.txt
00:04:38.575 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:38.575 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:38.575 ++ uname
00:04:38.575 + [[ Linux == \L\i\n\u\x ]]
00:04:38.575 + sudo dmesg -T
00:04:38.575 + sudo dmesg --clear
00:04:38.575 + dmesg_pid=5206
00:04:38.575 + [[ Fedora Linux == FreeBSD ]]
00:04:38.575 + sudo dmesg -Tw
00:04:38.575 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:38.575 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:38.575 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:38.575 + [[ -x /usr/src/fio-static/fio ]]
00:04:38.575 + export FIO_BIN=/usr/src/fio-static/fio
00:04:38.575 + FIO_BIN=/usr/src/fio-static/fio
00:04:38.575 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:38.575 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:38.575 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:38.575 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:38.575 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:38.575 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:38.575 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:38.575 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:38.575 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:38.836 13:34:19 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:04:38.836 13:34:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:38.836 13:34:19 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:04:38.836 13:34:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:38.836 13:34:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:38.836 13:34:19 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:04:38.836 13:34:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:38.836 13:34:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:38.836 13:34:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:38.836 13:34:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:38.836 13:34:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:38.836 13:34:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:38.836 13:34:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:38.836 13:34:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:38.836 13:34:19 -- paths/export.sh@5 -- $ export PATH
00:04:38.836 13:34:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:38.836 13:34:19 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:38.836 13:34:19 -- common/autobuild_common.sh@486 -- $ date +%s
00:04:38.836 13:34:19 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730900059.XXXXXX
00:04:38.836 13:34:19 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730900059.4jMUNf
00:04:38.836 13:34:19 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:38.836 13:34:19 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:04:38.836 13:34:19 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:38.836 13:34:19 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:38.836 13:34:19 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:38.836 13:34:19 -- common/autobuild_common.sh@502 -- $ get_config_params
00:04:38.836 13:34:19 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:04:38.836 13:34:19 -- common/autotest_common.sh@10 -- $ set +x
00:04:38.836 13:34:19 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:04:38.836 13:34:19 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:04:38.836 13:34:19 -- pm/common@17 -- $ local monitor
00:04:38.836 13:34:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:38.836 13:34:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:38.836 13:34:19 -- pm/common@25 -- $ sleep 1
00:04:38.836 13:34:19 -- pm/common@21 -- $ date +%s
00:04:38.836 13:34:19 -- pm/common@21 -- $ date +%s
00:04:38.836 13:34:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730900059
00:04:38.836 13:34:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730900059
00:04:38.836 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730900059_collect-vmstat.pm.log
00:04:38.836 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730900059_collect-cpu-load.pm.log
00:04:39.773 13:34:20 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:04:39.773 13:34:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:39.773 13:34:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:39.773 13:34:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:39.773 13:34:20 -- spdk/autobuild.sh@16 -- $ date -u
00:04:39.773 Wed Nov 6 01:34:20 PM UTC 2024
00:04:39.773 13:34:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:39.773 v25.01-pre-186-g079966333
00:04:39.773 13:34:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:04:39.773 13:34:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:39.773 13:34:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:39.773 13:34:20 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:04:39.773 13:34:20 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:04:39.773 13:34:20 -- common/autotest_common.sh@10 -- $ set +x
00:04:39.773 ************************************
00:04:39.773 START TEST ubsan
00:04:39.773 ************************************
00:04:39.773 using ubsan
00:04:39.773 13:34:20 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:04:39.773
00:04:39.773 real 0m0.001s
00:04:39.773 user 0m0.001s
00:04:39.773 sys 0m0.000s
00:04:39.773 13:34:20 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:04:39.773 13:34:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:39.773 ************************************
00:04:39.773 END TEST ubsan
00:04:39.773 ************************************
00:04:40.031 13:34:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:40.031 13:34:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:40.031 13:34:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:40.031 13:34:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:04:40.031 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:40.031 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:40.598 Using 'verbs' RDMA provider
00:04:56.859 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:14.982 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:14.982 go version go1.21.1 linux/amd64
00:05:14.982 Creating mk/config.mk...done.
00:05:14.982 Creating mk/cc.flags.mk...done.
00:05:14.982 Type 'make' to build.
00:05:14.982 13:34:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:14.982 13:34:55 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:14.982 13:34:55 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:14.982 13:34:55 -- common/autotest_common.sh@10 -- $ set +x
00:05:14.982 ************************************
00:05:14.982 START TEST make
00:05:14.982 ************************************
00:05:14.982 13:34:55 make -- common/autotest_common.sh@1127 -- $ make -j10
00:05:14.982 make[1]: Nothing to be done for 'all'.
00:05:29.876 The Meson build system
00:05:29.876 Version: 1.5.0
00:05:29.876 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:29.876 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:29.876 Build type: native build
00:05:29.876 Program cat found: YES (/usr/bin/cat)
00:05:29.876 Project name: DPDK
00:05:29.876 Project version: 24.03.0
00:05:29.876 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:29.876 C linker for the host machine: cc ld.bfd 2.40-14
00:05:29.876 Host machine cpu family: x86_64
00:05:29.876 Host machine cpu: x86_64
00:05:29.876 Message: ## Building in Developer Mode ##
00:05:29.876 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:29.876 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:29.876 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:29.876 Program python3 found: YES (/usr/bin/python3)
00:05:29.876 Program cat found: YES (/usr/bin/cat)
00:05:29.876 Compiler for C supports arguments -march=native: YES
00:05:29.876 Checking for size of "void *" : 8
00:05:29.876 Checking for size of "void *" : 8 (cached)
00:05:29.876 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:29.876 Library m found: YES
00:05:29.876 Library numa found: YES
00:05:29.876 Has header "numaif.h" : YES
00:05:29.876 Library fdt found: NO
00:05:29.876 Library execinfo found: NO
00:05:29.876 Has header "execinfo.h" : YES
00:05:29.876 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:29.876 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:29.876 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:29.876 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:29.876 Run-time dependency openssl found: YES 3.1.1
00:05:29.876 Run-time dependency libpcap found: YES 1.10.4
00:05:29.876 Has header "pcap.h" with dependency libpcap: YES
00:05:29.876 Compiler for C supports arguments -Wcast-qual: YES
00:05:29.876 Compiler for C supports arguments -Wdeprecated: YES
00:05:29.876 Compiler for C supports arguments -Wformat: YES
00:05:29.876 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:29.876 Compiler for C supports arguments -Wformat-security: NO
00:05:29.876 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:29.876 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:29.876 Compiler for C supports arguments -Wnested-externs: YES
00:05:29.876 Compiler for C supports arguments -Wold-style-definition: YES
00:05:29.876 Compiler for C supports arguments -Wpointer-arith: YES
00:05:29.876 Compiler for C supports arguments -Wsign-compare: YES
00:05:29.876 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:29.876 Compiler for C supports arguments -Wundef: YES
00:05:29.876 Compiler for C supports arguments -Wwrite-strings: YES
00:05:29.876 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:29.876 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:29.876 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:29.876 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:29.876 Program objdump found: YES (/usr/bin/objdump)
00:05:29.876 Compiler for C supports arguments -mavx512f: YES
00:05:29.876 Checking if "AVX512 checking" compiles: YES
00:05:29.876 Fetching value of define "__SSE4_2__" : 1
00:05:29.876 Fetching value of define "__AES__" : 1
00:05:29.876 Fetching value of define "__AVX__" : 1
00:05:29.876 Fetching value of define "__AVX2__" : 1
00:05:29.876 Fetching value of define "__AVX512BW__" : 1
00:05:29.876 Fetching value of define "__AVX512CD__" : 1
00:05:29.876 Fetching value of define "__AVX512DQ__" : 1
00:05:29.876 Fetching value of define "__AVX512F__" : 1
00:05:29.876 Fetching value of define "__AVX512VL__" : 1
00:05:29.876 Fetching value of define "__PCLMUL__" : 1
00:05:29.877 Fetching value of define "__RDRND__" : 1
00:05:29.877 Fetching value of define "__RDSEED__" : 1
00:05:29.877 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:29.877 Fetching value of define "__znver1__" : (undefined)
00:05:29.877 Fetching value of define "__znver2__" : (undefined)
00:05:29.877 Fetching value of define "__znver3__" : (undefined)
00:05:29.877 Fetching value of define "__znver4__" : (undefined)
00:05:29.877 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:29.877 Message: lib/log: Defining dependency "log"
00:05:29.877 Message: lib/kvargs: Defining dependency "kvargs"
00:05:29.877 Message: lib/telemetry: Defining dependency "telemetry"
00:05:29.877 Checking for function "getentropy" : NO
00:05:29.877 Message: lib/eal: Defining dependency "eal"
00:05:29.877 Message: lib/ring: Defining dependency "ring"
00:05:29.877 Message: lib/rcu: Defining dependency "rcu"
00:05:29.877 Message: lib/mempool: Defining dependency "mempool"
00:05:29.877 Message: lib/mbuf: Defining dependency "mbuf"
00:05:29.877 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:29.877 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:29.877 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:29.877 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:29.877 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:29.877 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:29.877 Compiler for C supports arguments -mpclmul: YES
00:05:29.877 Compiler for C supports arguments -maes: YES
00:05:29.877 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:29.877 Compiler for C supports arguments -mavx512bw: YES
00:05:29.877 Compiler for C supports arguments -mavx512dq: YES
00:05:29.877 Compiler for C supports arguments -mavx512vl: YES
00:05:29.877 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:29.877 Compiler for C supports arguments -mavx2: YES
00:05:29.877 Compiler for C supports arguments -mavx: YES
00:05:29.877 Message: lib/net: Defining dependency "net"
00:05:29.877 Message: lib/meter: Defining dependency "meter"
00:05:29.877 Message: lib/ethdev: Defining dependency "ethdev"
00:05:29.877 Message: lib/pci: Defining dependency "pci"
00:05:29.877 Message: lib/cmdline: Defining dependency "cmdline"
00:05:29.877 Message: lib/hash: Defining dependency "hash"
00:05:29.877 Message: lib/timer: Defining dependency "timer"
00:05:29.877 Message: lib/compressdev: Defining dependency "compressdev"
00:05:29.877 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:29.877 Message: lib/dmadev: Defining dependency "dmadev"
00:05:29.877 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:29.877 Message: lib/power: Defining dependency "power"
00:05:29.877 Message: lib/reorder: Defining dependency "reorder"
00:05:29.877 Message: lib/security: Defining dependency "security"
00:05:29.877 Has header "linux/userfaultfd.h" : YES
00:05:29.877 Has header "linux/vduse.h" : YES
00:05:29.877 Message: lib/vhost: Defining dependency "vhost"
00:05:29.877 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:29.877 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:29.877 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:29.877 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:29.877 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:29.877 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:29.877 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:29.877 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:29.877 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:29.877 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:29.877 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:29.877 Configuring doxy-api-html.conf using configuration
00:05:29.877 Configuring doxy-api-man.conf using configuration
00:05:29.877 Program mandb found: YES (/usr/bin/mandb)
00:05:29.877 Program sphinx-build found: NO
00:05:29.877 Configuring rte_build_config.h using configuration
00:05:29.877 Message:
00:05:29.877 =================
00:05:29.877 Applications Enabled
00:05:29.877 =================
00:05:29.877
00:05:29.877 apps:
00:05:29.877
00:05:29.877
00:05:29.877 Message:
00:05:29.877 =================
00:05:29.877 Libraries Enabled
00:05:29.877 =================
00:05:29.877
00:05:29.877 libs:
00:05:29.877 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:29.877 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:29.877 cryptodev, dmadev, power, reorder, security, vhost,
00:05:29.877
00:05:29.877 Message:
00:05:29.877 ===============
00:05:29.877 Drivers Enabled
00:05:29.877 ===============
00:05:29.877
00:05:29.877 common:
00:05:29.877
00:05:29.877 bus:
00:05:29.877 pci, vdev,
00:05:29.877 mempool:
00:05:29.877 ring,
00:05:29.877 dma:
00:05:29.877
00:05:29.877 net:
00:05:29.877
00:05:29.877 crypto:
00:05:29.877
00:05:29.877 compress:
00:05:29.877
00:05:29.877 vdpa:
00:05:29.877
00:05:29.877
00:05:29.877 Message:
00:05:29.877 =================
00:05:29.877 Content Skipped
00:05:29.877 =================
00:05:29.877
00:05:29.877 apps:
00:05:29.877 dumpcap: explicitly disabled via build config
00:05:29.877 graph: explicitly disabled via build config
00:05:29.877 pdump: explicitly disabled via build config
00:05:29.877 proc-info: explicitly disabled via build config
00:05:29.877 test-acl: explicitly disabled via build config
00:05:29.877 test-bbdev: explicitly disabled via build config
00:05:29.877 test-cmdline: explicitly disabled via build config
00:05:29.877 test-compress-perf: explicitly disabled via build config
00:05:29.877 test-crypto-perf: explicitly disabled via build config
00:05:29.877 test-dma-perf: explicitly disabled via build config
00:05:29.877 test-eventdev: explicitly disabled via build config
00:05:29.877 test-fib: explicitly disabled via build config
00:05:29.877 test-flow-perf: explicitly disabled via build config
00:05:29.877 test-gpudev: explicitly disabled via build config
00:05:29.877 test-mldev: explicitly disabled via build config
00:05:29.877 test-pipeline: explicitly disabled via build config
00:05:29.877 test-pmd: explicitly disabled via build config
00:05:29.877 test-regex: explicitly disabled via build config
00:05:29.877 test-sad: explicitly disabled via build config
00:05:29.877 test-security-perf: explicitly disabled via build config
00:05:29.877
00:05:29.877 libs:
00:05:29.877 argparse: explicitly disabled via build config
00:05:29.877 metrics: explicitly disabled via build config
00:05:29.877 acl: explicitly disabled via build config
00:05:29.877 bbdev: explicitly disabled via build config
00:05:29.877 bitratestats: explicitly disabled via build config
00:05:29.877 bpf: explicitly disabled via build config
00:05:29.877 cfgfile: explicitly disabled via build config
00:05:29.877 distributor: explicitly disabled via build config
00:05:29.877 efd: explicitly disabled via build config
00:05:29.877 eventdev: explicitly disabled via build config
00:05:29.877 dispatcher: explicitly disabled via build config
00:05:29.877 gpudev: explicitly disabled via build config
00:05:29.877 gro: explicitly disabled via build config
00:05:29.877 gso: explicitly disabled via build config
00:05:29.877 ip_frag: explicitly disabled via build config
00:05:29.877 jobstats: explicitly disabled via build config
00:05:29.877 latencystats: explicitly disabled via build config
00:05:29.877 lpm: explicitly disabled via build config
00:05:29.877 member: explicitly disabled via build config
00:05:29.877 pcapng: explicitly disabled via build config
00:05:29.877 rawdev: explicitly disabled via build config
00:05:29.877 regexdev: explicitly disabled via build config
00:05:29.877 mldev: explicitly disabled via build config
00:05:29.877 rib: explicitly disabled via build config
00:05:29.877 sched: explicitly disabled via build config
00:05:29.877 stack: explicitly disabled via build config
00:05:29.877 ipsec: explicitly disabled via build config
00:05:29.877 pdcp: explicitly disabled via build config
00:05:29.877 fib: explicitly disabled via build config
00:05:29.877 port: explicitly disabled via build config
00:05:29.877 pdump: explicitly disabled via build config
00:05:29.877 table: explicitly disabled via build config
00:05:29.877 pipeline: explicitly disabled via build config
00:05:29.877 graph: explicitly disabled via build config
00:05:29.877 node: explicitly disabled via build config
00:05:29.877
00:05:29.877 drivers:
00:05:29.877 common/cpt: not in enabled drivers build config
00:05:29.877 common/dpaax: not in enabled drivers build config
00:05:29.877 common/iavf: not in enabled drivers build config
00:05:29.877 common/idpf: not in enabled drivers build config
00:05:29.877 common/ionic: not in enabled drivers build config
00:05:29.877 common/mvep: not in enabled drivers build config
00:05:29.877 common/octeontx: not in enabled drivers build config
00:05:29.877 bus/auxiliary: not in enabled drivers build config
00:05:29.877 bus/cdx: not in enabled drivers build config
00:05:29.877 bus/dpaa: not in enabled drivers build config
00:05:29.877 bus/fslmc: not in enabled drivers build config
00:05:29.877 bus/ifpga: not in enabled drivers build config
00:05:29.877 bus/platform: not in enabled drivers build config
00:05:29.877 bus/uacce: not in enabled drivers build config
00:05:29.877 bus/vmbus: not in enabled drivers build config
00:05:29.877 common/cnxk: not in enabled drivers build config
00:05:29.877 common/mlx5: not in enabled drivers build config
00:05:29.877 common/nfp: not in enabled drivers build config
00:05:29.877 common/nitrox: not in enabled drivers build config
00:05:29.877 common/qat: not in enabled drivers build config
00:05:29.877 common/sfc_efx: not in enabled drivers build config
00:05:29.877 mempool/bucket: not in enabled drivers build config
00:05:29.877 mempool/cnxk: not in enabled drivers build config
00:05:29.877 mempool/dpaa: not in enabled drivers build config
00:05:29.877 mempool/dpaa2: not in enabled drivers build config
00:05:29.877 mempool/octeontx: not in enabled drivers build config
00:05:29.877 mempool/stack: not in enabled drivers build config
00:05:29.877 dma/cnxk: not in enabled drivers build config
00:05:29.877 dma/dpaa: not in enabled drivers build config
00:05:29.877 dma/dpaa2: not in enabled drivers build config
00:05:29.877 dma/hisilicon: not in enabled drivers build config
00:05:29.877 dma/idxd: not in enabled drivers build config
00:05:29.877 dma/ioat: not in enabled drivers build config
00:05:29.877 dma/skeleton: not in enabled drivers build config
00:05:29.877 net/af_packet: not in enabled drivers build config
00:05:29.877 net/af_xdp: not in enabled drivers build config
00:05:29.878 net/ark: not in enabled drivers build config
00:05:29.878 net/atlantic: not in enabled drivers build config
00:05:29.878 net/avp: not in enabled drivers build config
00:05:29.878 net/axgbe: not in enabled drivers build config
00:05:29.878 net/bnx2x: not in enabled drivers build config
00:05:29.878 net/bnxt: not in enabled drivers build config
00:05:29.878 net/bonding: not in enabled drivers build config
00:05:29.878 net/cnxk: not in enabled drivers build config
00:05:29.878 net/cpfl: not in enabled drivers build config
00:05:29.878 net/cxgbe: not in enabled drivers build config
00:05:29.878 net/dpaa: not in enabled drivers build config
00:05:29.878 net/dpaa2: not in enabled drivers build config
00:05:29.878 net/e1000: not in enabled drivers build config
00:05:29.878 net/ena: not in enabled drivers build config
00:05:29.878 net/enetc: not in enabled drivers build config
00:05:29.878 net/enetfec: not in enabled drivers build config
00:05:29.878 net/enic: not in enabled drivers build config
00:05:29.878 net/failsafe: not in enabled drivers build config
00:05:29.878 net/fm10k: not in enabled drivers build config
00:05:29.878 net/gve: not in enabled drivers build config
00:05:29.878 net/hinic: not in enabled drivers build config
00:05:29.878 net/hns3: not in enabled drivers build config
00:05:29.878 net/i40e: not in enabled drivers build config
00:05:29.878 net/iavf: not in enabled drivers build config
00:05:29.878 net/ice: not in enabled drivers build config
00:05:29.878 net/idpf: not in enabled drivers build config
00:05:29.878 net/igc: not in enabled drivers build config
00:05:29.878 net/ionic: not in enabled drivers build config
00:05:29.878 net/ipn3ke: not in enabled drivers build config
00:05:29.878 net/ixgbe: not in enabled drivers build config
00:05:29.878 net/mana: not in enabled drivers build config
00:05:29.878 net/memif: not in enabled drivers build config
00:05:29.878 net/mlx4: not in enabled drivers build config
00:05:29.878 net/mlx5: not in enabled drivers build config
00:05:29.878 net/mvneta: not in enabled drivers build config
00:05:29.878 net/mvpp2: not in enabled drivers build config
00:05:29.878 net/netvsc: not in enabled drivers build config
00:05:29.878 net/nfb: not in enabled drivers build config
00:05:29.878 net/nfp: not in enabled drivers build config
00:05:29.878 net/ngbe: not in enabled drivers build config
00:05:29.878 net/null: not in enabled drivers build config
00:05:29.878 net/octeontx: not in enabled drivers build config
00:05:29.878 net/octeon_ep: not in enabled drivers build config
00:05:29.878 net/pcap: not in enabled drivers build config
00:05:29.878 net/pfe: not in enabled drivers build config
00:05:29.878 net/qede: not in enabled drivers build config
00:05:29.878 net/ring: not in enabled drivers build config
00:05:29.878 net/sfc: not in enabled drivers build config
00:05:29.878 net/softnic: not in enabled drivers build config
00:05:29.878 net/tap: not in enabled drivers build config
00:05:29.878 net/thunderx: not in enabled drivers build config
00:05:29.878 net/txgbe: not in enabled drivers build config
00:05:29.878 net/vdev_netvsc: not in enabled drivers build config
00:05:29.878 net/vhost: not in enabled drivers build config
00:05:29.878 net/virtio: not in enabled drivers build config
00:05:29.878 net/vmxnet3: not in enabled drivers build config
00:05:29.878 raw/*: missing internal dependency, "rawdev"
00:05:29.878 crypto/armv8: not in enabled drivers build config
00:05:29.878 crypto/bcmfs: not in enabled drivers build config
00:05:29.878 crypto/caam_jr: not in enabled drivers build config
00:05:29.878 crypto/ccp: not in enabled drivers build config
00:05:29.878 crypto/cnxk: not in enabled drivers build config
00:05:29.878 crypto/dpaa_sec: not in enabled drivers build config
00:05:29.878 crypto/dpaa2_sec: not in enabled drivers build config
00:05:29.878 crypto/ipsec_mb: not in enabled drivers build config
00:05:29.878 crypto/mlx5: not in enabled drivers build config
00:05:29.878 crypto/mvsam: not in enabled drivers build config
00:05:29.878 crypto/nitrox: not in enabled drivers build config
00:05:29.878 crypto/null: not in enabled drivers build config
00:05:29.878 crypto/octeontx: not in enabled drivers build config
00:05:29.878 crypto/openssl: not in enabled drivers build config
00:05:29.878 crypto/scheduler: not in enabled drivers build config
00:05:29.878 crypto/uadk: not in enabled drivers build config
00:05:29.878 crypto/virtio: not in enabled drivers build config
00:05:29.878 compress/isal: not in enabled drivers build config
00:05:29.878 compress/mlx5: not in enabled drivers build config
00:05:29.878 compress/nitrox: not in enabled drivers build config
00:05:29.878 compress/octeontx: not in enabled drivers build config
00:05:29.878 compress/zlib: not in enabled drivers build config
00:05:29.878 regex/*: missing internal dependency, "regexdev"
00:05:29.878 ml/*: missing internal dependency, "mldev"
00:05:29.878 vdpa/ifc: not in enabled drivers build config
00:05:29.878 vdpa/mlx5: not in enabled drivers build config
00:05:29.878 vdpa/nfp: not in enabled drivers build config
00:05:29.878 vdpa/sfc: not in enabled drivers build config
00:05:29.878 event/*: missing internal dependency, "eventdev"
00:05:29.878 baseband/*: missing internal dependency, "bbdev"
00:05:29.878 gpu/*: missing internal dependency, "gpudev"
00:05:29.878
00:05:29.878
00:05:29.878 Build targets in project: 85
00:05:29.878
00:05:29.878 DPDK 24.03.0
00:05:29.878
00:05:29.878 User defined options
00:05:29.878 buildtype : debug
00:05:29.878 default_library : shared
00:05:29.878 libdir : lib
00:05:29.878 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:29.878 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:29.878 c_link_args :
00:05:29.878 cpu_instruction_set: native
00:05:29.878 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:29.878 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:29.878 enable_docs : false
00:05:29.878 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:29.878 enable_kmods : false
00:05:29.878 max_lcores : 128
00:05:29.878 tests : false
00:05:29.878
00:05:29.878 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:29.878 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:29.878 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:29.878 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:29.878 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:29.878 [4/268] Linking static target lib/librte_kvargs.a
00:05:29.878 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:29.878 [6/268] Linking static target lib/librte_log.a
00:05:29.878 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:29.878 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:29.878 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:29.878 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:29.878 [11/268] Linking static target lib/librte_telemetry.a
00:05:29.878 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:29.878 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:29.878 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:29.878 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:29.878 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:29.878 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:29.878 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:29.878 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:29.878 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:29.878 [21/268] Linking target lib/librte_log.so.24.1
00:05:29.878 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:30.137 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:30.137 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:30.137 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:30.137 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:30.137 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:30.137 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:30.137 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:30.137 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:30.137 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:30.394 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:30.394 [33/268] Linking target lib/librte_kvargs.so.24.1
00:05:30.652 [34/268] Linking target lib/librte_telemetry.so.24.1
00:05:30.652 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:30.652 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:30.652 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:30.652 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:30.652 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:30.652 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:30.652 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:30.652 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:30.652 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:30.911 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:30.911 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:30.911 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:30.911 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:30.911 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:31.171 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:31.171 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:31.171 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:31.171 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:31.429 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:31.429 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:31.429 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:31.429 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:31.429 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:31.429 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:31.429 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:31.689 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:31.689 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:31.689 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:31.689 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:31.689 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:31.949 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:31.949 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:31.949 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:31.949 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:31.949 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:31.949 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:32.208 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:32.208 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:32.208 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:32.208 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:32.208 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:32.208 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:32.467 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:32.467 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:32.467 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:32.467 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:32.467 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:32.467 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:32.725 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:32.725 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:32.725 [85/268] Linking static target lib/librte_eal.a
00:05:32.725 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:32.725 [87/268] Linking static target lib/librte_ring.a
00:05:32.984 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:32.984 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:32.984 [90/268] Linking static target lib/librte_rcu.a
00:05:32.984 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:32.984 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:32.984 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:32.984 [94/268] Linking static target lib/librte_mempool.a
00:05:33.244 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:33.244 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:33.244 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:33.244 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:33.244 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:33.244 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:33.503 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:33.503 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:33.503 [103/268] Linking static target lib/librte_mbuf.a
00:05:33.761 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:33.761 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:33.761 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:33.761 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:34.020 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:34.020 [109/268] Linking static target lib/librte_meter.a
00:05:34.020 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:34.020 [111/268] Linking static target lib/librte_net.a
00:05:34.020 [112/268] Compiling C
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:34.020 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:34.020 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:34.279 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:34.280 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.280 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.539 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.539 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:34.539 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:34.797 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.797 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:34.797 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:35.057 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:35.057 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:35.057 [126/268] Linking static target lib/librte_pci.a 00:05:35.057 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:35.316 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:35.316 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:35.316 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:35.316 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:35.316 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:35.316 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.316 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:35.316 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:35.576 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:35.576 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:35.576 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:35.576 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:35.576 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:35.576 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:35.576 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:35.576 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:35.576 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:35.576 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:35.576 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:35.835 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:35.835 [148/268] Linking static target lib/librte_ethdev.a 00:05:35.835 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:35.835 [150/268] Linking static target lib/librte_cmdline.a 00:05:35.835 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:36.095 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:36.095 [153/268] Linking static target lib/librte_timer.a 00:05:36.095 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:36.354 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:36.354 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:36.354 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:36.354 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:36.354 [159/268] Linking static target lib/librte_compressdev.a 00:05:36.354 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:36.354 [161/268] Linking static target lib/librte_hash.a 00:05:36.613 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:36.613 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.613 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:36.871 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:36.871 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:36.871 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:36.871 [168/268] Linking static target lib/librte_dmadev.a 00:05:36.871 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:37.130 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:37.130 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:37.130 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:37.130 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:37.130 [174/268] Linking static target lib/librte_cryptodev.a 00:05:37.388 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:37.388 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.646 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.646 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:37.646 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:37.646 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.646 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:37.906 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:37.906 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.906 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:38.165 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:38.165 [186/268] Linking static target lib/librte_power.a 00:05:38.165 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:38.165 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:38.165 [189/268] Linking static target lib/librte_reorder.a 00:05:38.424 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:38.424 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:05:38.424 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:38.424 [193/268] Linking static target lib/librte_security.a 00:05:38.682 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:38.682 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.941 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:39.200 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:39.200 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:39.458 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:39.458 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.458 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.458 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:39.717 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:39.717 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:39.975 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.975 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:39.975 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:39.975 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:39.975 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:39.975 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:39.975 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:39.975 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:40.234 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:40.234 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:40.234 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:40.234 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:40.234 [217/268] Linking static target drivers/librte_bus_vdev.a 00:05:40.234 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:40.234 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:40.234 [220/268] Linking static target drivers/librte_bus_pci.a 00:05:40.234 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:40.234 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:40.490 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.490 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:40.490 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:40.490 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:40.490 [227/268] Linking static target drivers/librte_mempool_ring.a 00:05:40.748 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:05:41.312 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:41.312 [230/268] Linking static target lib/librte_vhost.a 00:05:43.834 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.729 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.729 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.729 [234/268] Linking target lib/librte_eal.so.24.1 00:05:45.987 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:45.987 [236/268] Linking target lib/librte_meter.so.24.1 00:05:45.987 [237/268] Linking target lib/librte_timer.so.24.1 00:05:45.987 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:45.987 [239/268] Linking target lib/librte_ring.so.24.1 00:05:45.987 [240/268] Linking target lib/librte_pci.so.24.1 00:05:45.987 [241/268] Linking target lib/librte_dmadev.so.24.1 00:05:45.987 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:45.987 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:45.987 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:45.987 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:45.987 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:46.245 [247/268] Linking target lib/librte_mempool.so.24.1 00:05:46.245 [248/268] Linking target lib/librte_rcu.so.24.1 00:05:46.245 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:46.245 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:46.245 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:46.245 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:46.245 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:46.503 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:46.503 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:46.503 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:46.503 [257/268] Linking target lib/librte_net.so.24.1 00:05:46.503 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:46.762 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:46.762 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:46.762 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:46.762 [262/268] Linking target lib/librte_hash.so.24.1 00:05:46.762 [263/268] Linking target lib/librte_security.so.24.1 00:05:46.762 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:47.020 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:47.020 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:47.020 [267/268] Linking target lib/librte_power.so.24.1 00:05:47.020 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:47.020 INFO: autodetecting backend as ninja 00:05:47.020 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:08.987 CC lib/log/log.o 00:06:08.987 CC lib/log/log_flags.o 00:06:08.987 CC lib/log/log_deprecated.o 
00:06:08.987 CC lib/ut/ut.o 00:06:08.987 CC lib/ut_mock/mock.o 00:06:08.987 LIB libspdk_ut.a 00:06:08.987 LIB libspdk_log.a 00:06:08.987 LIB libspdk_ut_mock.a 00:06:08.987 SO libspdk_ut.so.2.0 00:06:08.987 SO libspdk_log.so.7.1 00:06:08.987 SO libspdk_ut_mock.so.6.0 00:06:08.987 SYMLINK libspdk_ut.so 00:06:08.987 SYMLINK libspdk_ut_mock.so 00:06:08.987 SYMLINK libspdk_log.so 00:06:08.987 CC lib/util/base64.o 00:06:08.987 CC lib/util/bit_array.o 00:06:08.987 CC lib/dma/dma.o 00:06:08.987 CC lib/util/crc16.o 00:06:08.987 CC lib/util/crc32.o 00:06:08.987 CC lib/util/crc32c.o 00:06:08.987 CC lib/ioat/ioat.o 00:06:08.987 CC lib/util/cpuset.o 00:06:08.987 CXX lib/trace_parser/trace.o 00:06:08.987 CC lib/util/crc32_ieee.o 00:06:08.987 CC lib/vfio_user/host/vfio_user_pci.o 00:06:08.987 CC lib/util/crc64.o 00:06:08.987 CC lib/util/dif.o 00:06:08.987 CC lib/vfio_user/host/vfio_user.o 00:06:08.987 LIB libspdk_dma.a 00:06:08.987 CC lib/util/fd.o 00:06:08.987 SO libspdk_dma.so.5.0 00:06:08.987 CC lib/util/fd_group.o 00:06:08.987 CC lib/util/file.o 00:06:08.987 CC lib/util/hexlify.o 00:06:08.987 LIB libspdk_ioat.a 00:06:08.987 SYMLINK libspdk_dma.so 00:06:08.987 CC lib/util/iov.o 00:06:08.987 SO libspdk_ioat.so.7.0 00:06:08.987 CC lib/util/math.o 00:06:08.987 LIB libspdk_vfio_user.a 00:06:09.246 SYMLINK libspdk_ioat.so 00:06:09.246 CC lib/util/net.o 00:06:09.246 CC lib/util/pipe.o 00:06:09.246 SO libspdk_vfio_user.so.5.0 00:06:09.246 CC lib/util/strerror_tls.o 00:06:09.246 CC lib/util/string.o 00:06:09.246 SYMLINK libspdk_vfio_user.so 00:06:09.246 CC lib/util/uuid.o 00:06:09.246 CC lib/util/xor.o 00:06:09.246 CC lib/util/zipf.o 00:06:09.246 CC lib/util/md5.o 00:06:09.503 LIB libspdk_util.a 00:06:09.503 SO libspdk_util.so.10.1 00:06:09.762 SYMLINK libspdk_util.so 00:06:09.762 LIB libspdk_trace_parser.a 00:06:09.762 SO libspdk_trace_parser.so.6.0 00:06:10.020 SYMLINK libspdk_trace_parser.so 00:06:10.020 CC lib/idxd/idxd.o 00:06:10.020 CC lib/idxd/idxd_user.o 00:06:10.020 CC lib/idxd/idxd_kernel.o 00:06:10.020 CC lib/vmd/vmd.o 00:06:10.020 CC lib/vmd/led.o 00:06:10.020 CC lib/conf/conf.o 00:06:10.020 CC lib/rdma_utils/rdma_utils.o 00:06:10.020 CC lib/env_dpdk/memory.o 00:06:10.020 CC lib/env_dpdk/env.o 00:06:10.020 CC lib/json/json_parse.o 00:06:10.020 CC lib/json/json_util.o 00:06:10.278 CC lib/json/json_write.o 00:06:10.278 CC lib/env_dpdk/pci.o 00:06:10.278 LIB libspdk_conf.a 00:06:10.278 CC lib/env_dpdk/init.o 00:06:10.278 SO libspdk_conf.so.6.0 00:06:10.278 LIB libspdk_rdma_utils.a 00:06:10.278 SYMLINK libspdk_conf.so 00:06:10.278 SO libspdk_rdma_utils.so.1.0 00:06:10.278 CC lib/env_dpdk/threads.o 00:06:10.278 CC lib/env_dpdk/pci_ioat.o 00:06:10.278 SYMLINK libspdk_rdma_utils.so 00:06:10.536 CC lib/env_dpdk/pci_virtio.o 00:06:10.536 LIB libspdk_json.a 00:06:10.536 CC lib/env_dpdk/pci_vmd.o 00:06:10.536 SO libspdk_json.so.6.0 00:06:10.536 CC lib/env_dpdk/pci_idxd.o 00:06:10.536 LIB libspdk_idxd.a 00:06:10.536 SYMLINK libspdk_json.so 00:06:10.536 CC lib/env_dpdk/pci_event.o 00:06:10.536 CC lib/env_dpdk/sigbus_handler.o 00:06:10.536 SO libspdk_idxd.so.12.1 00:06:10.536 CC lib/env_dpdk/pci_dpdk.o 00:06:10.536 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:10.795 SYMLINK libspdk_idxd.so 00:06:10.795 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:10.795 LIB libspdk_vmd.a 00:06:10.795 SO libspdk_vmd.so.6.0 00:06:10.795 SYMLINK libspdk_vmd.so 00:06:10.795 CC lib/rdma_provider/common.o 00:06:10.795 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:11.054 CC lib/jsonrpc/jsonrpc_server.o 00:06:11.054 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:06:11.054 CC lib/jsonrpc/jsonrpc_client.o 00:06:11.054 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:11.054 LIB libspdk_rdma_provider.a 00:06:11.054 SO libspdk_rdma_provider.so.7.0 00:06:11.313 LIB libspdk_jsonrpc.a 00:06:11.313 SYMLINK libspdk_rdma_provider.so 00:06:11.313 SO libspdk_jsonrpc.so.6.0 00:06:11.313 SYMLINK libspdk_jsonrpc.so 00:06:11.313 LIB libspdk_env_dpdk.a 00:06:11.571 SO libspdk_env_dpdk.so.15.1 00:06:11.571 SYMLINK libspdk_env_dpdk.so 00:06:11.830 CC lib/rpc/rpc.o 00:06:12.090 LIB libspdk_rpc.a 00:06:12.090 SO libspdk_rpc.so.6.0 00:06:12.090 SYMLINK libspdk_rpc.so 00:06:12.669 CC lib/keyring/keyring.o 00:06:12.669 CC lib/keyring/keyring_rpc.o 00:06:12.669 CC lib/notify/notify.o 00:06:12.669 CC lib/notify/notify_rpc.o 00:06:12.669 CC lib/trace/trace.o 00:06:12.669 CC lib/trace/trace_rpc.o 00:06:12.669 CC lib/trace/trace_flags.o 00:06:12.669 LIB libspdk_notify.a 00:06:12.669 LIB libspdk_keyring.a 00:06:12.669 SO libspdk_notify.so.6.0 00:06:12.928 SO libspdk_keyring.so.2.0 00:06:12.928 LIB libspdk_trace.a 00:06:12.928 SYMLINK libspdk_notify.so 00:06:12.928 SO libspdk_trace.so.11.0 00:06:12.928 SYMLINK libspdk_keyring.so 00:06:12.928 SYMLINK libspdk_trace.so 00:06:13.494 CC lib/sock/sock.o 00:06:13.494 CC lib/sock/sock_rpc.o 00:06:13.494 CC lib/thread/iobuf.o 00:06:13.494 CC lib/thread/thread.o 00:06:13.752 LIB libspdk_sock.a 00:06:13.752 SO libspdk_sock.so.10.0 00:06:14.010 SYMLINK libspdk_sock.so 00:06:14.301 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:14.301 CC lib/nvme/nvme_ns_cmd.o 00:06:14.301 CC lib/nvme/nvme_ctrlr.o 00:06:14.301 CC lib/nvme/nvme_fabric.o 00:06:14.301 CC lib/nvme/nvme_ns.o 00:06:14.301 CC lib/nvme/nvme_qpair.o 00:06:14.301 CC lib/nvme/nvme_pcie_common.o 00:06:14.301 CC lib/nvme/nvme.o 00:06:14.301 CC lib/nvme/nvme_pcie.o 00:06:15.233 CC lib/nvme/nvme_quirks.o 00:06:15.233 CC lib/nvme/nvme_transport.o 00:06:15.233 CC lib/nvme/nvme_discovery.o 00:06:15.233 LIB libspdk_thread.a 00:06:15.233 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:15.233 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:15.233 SO libspdk_thread.so.11.0 00:06:15.233 CC lib/nvme/nvme_tcp.o 00:06:15.233 SYMLINK libspdk_thread.so 00:06:15.233 CC lib/nvme/nvme_opal.o 00:06:15.491 CC lib/accel/accel.o 00:06:15.491 CC lib/accel/accel_rpc.o 00:06:15.491 CC lib/nvme/nvme_io_msg.o 00:06:15.749 CC lib/accel/accel_sw.o 00:06:15.749 CC lib/nvme/nvme_poll_group.o 00:06:16.035 CC lib/blob/blobstore.o 00:06:16.035 CC lib/init/json_config.o 00:06:16.035 CC lib/virtio/virtio.o 00:06:16.035 CC lib/fsdev/fsdev.o 00:06:16.035 CC lib/fsdev/fsdev_io.o 00:06:16.294 CC lib/init/subsystem.o 00:06:16.294 CC lib/fsdev/fsdev_rpc.o 00:06:16.294 CC lib/virtio/virtio_vhost_user.o 00:06:16.294 CC lib/virtio/virtio_vfio_user.o 00:06:16.294 CC lib/init/subsystem_rpc.o 00:06:16.551 CC lib/nvme/nvme_zns.o 00:06:16.551 CC lib/virtio/virtio_pci.o 00:06:16.551 CC lib/init/rpc.o 00:06:16.551 LIB libspdk_accel.a 00:06:16.551 CC lib/nvme/nvme_stubs.o 00:06:16.551 SO libspdk_accel.so.16.0 00:06:16.551 CC lib/nvme/nvme_auth.o 00:06:16.551 LIB libspdk_fsdev.a 00:06:16.551 CC lib/nvme/nvme_cuse.o 00:06:16.811 SYMLINK libspdk_accel.so 00:06:16.811 LIB libspdk_init.a 00:06:16.811 CC lib/nvme/nvme_rdma.o 00:06:16.811 LIB libspdk_virtio.a 00:06:16.811 SO libspdk_fsdev.so.2.0 00:06:16.811 SO libspdk_init.so.6.0 00:06:16.811 SO libspdk_virtio.so.7.0 00:06:16.811 SYMLINK libspdk_fsdev.so 00:06:16.811 CC lib/blob/request.o 00:06:16.811 SYMLINK libspdk_init.so 00:06:16.811 CC lib/blob/zeroes.o 00:06:16.811 SYMLINK 
libspdk_virtio.so 00:06:16.811 CC lib/bdev/bdev.o 00:06:17.069 CC lib/blob/blob_bs_dev.o 00:06:17.069 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:17.069 CC lib/bdev/bdev_rpc.o 00:06:17.327 CC lib/event/app.o 00:06:17.327 CC lib/event/reactor.o 00:06:17.327 CC lib/event/log_rpc.o 00:06:17.327 CC lib/event/app_rpc.o 00:06:17.586 CC lib/event/scheduler_static.o 00:06:17.586 CC lib/bdev/bdev_zone.o 00:06:17.586 CC lib/bdev/part.o 00:06:17.586 CC lib/bdev/scsi_nvme.o 00:06:17.586 LIB libspdk_fuse_dispatcher.a 00:06:17.586 SO libspdk_fuse_dispatcher.so.1.0 00:06:17.586 LIB libspdk_event.a 00:06:17.845 SYMLINK libspdk_fuse_dispatcher.so 00:06:17.845 SO libspdk_event.so.14.0 00:06:17.845 SYMLINK libspdk_event.so 00:06:18.112 LIB libspdk_nvme.a 00:06:18.384 SO libspdk_nvme.so.15.0 00:06:18.643 SYMLINK libspdk_nvme.so 00:06:18.902 LIB libspdk_blob.a 00:06:19.160 SO libspdk_blob.so.11.0 00:06:19.160 SYMLINK libspdk_blob.so 00:06:19.419 LIB libspdk_bdev.a 00:06:19.677 CC lib/blobfs/blobfs.o 00:06:19.677 CC lib/lvol/lvol.o 00:06:19.677 CC lib/blobfs/tree.o 00:06:19.677 SO libspdk_bdev.so.17.0 00:06:19.677 SYMLINK libspdk_bdev.so 00:06:19.936 CC lib/nvmf/ctrlr.o 00:06:19.936 CC lib/nvmf/ctrlr_bdev.o 00:06:19.936 CC lib/nbd/nbd.o 00:06:19.936 CC lib/nvmf/ctrlr_discovery.o 00:06:19.936 CC lib/nvmf/subsystem.o 00:06:19.936 CC lib/ftl/ftl_core.o 00:06:19.936 CC lib/ublk/ublk.o 00:06:19.936 CC lib/scsi/dev.o 00:06:20.194 CC lib/scsi/lun.o 00:06:20.453 CC lib/nbd/nbd_rpc.o 00:06:20.453 CC lib/ftl/ftl_init.o 00:06:20.453 LIB libspdk_blobfs.a 00:06:20.453 SO libspdk_blobfs.so.10.0 00:06:20.453 LIB libspdk_lvol.a 00:06:20.453 CC lib/ftl/ftl_layout.o 00:06:20.453 SYMLINK libspdk_blobfs.so 00:06:20.453 CC lib/ftl/ftl_debug.o 00:06:20.453 SO libspdk_lvol.so.10.0 00:06:20.713 LIB libspdk_nbd.a 00:06:20.713 CC lib/scsi/port.o 00:06:20.713 SO libspdk_nbd.so.7.0 00:06:20.713 SYMLINK libspdk_lvol.so 00:06:20.713 CC lib/scsi/scsi.o 00:06:20.713 CC lib/ublk/ublk_rpc.o 00:06:20.713 CC lib/scsi/scsi_bdev.o 00:06:20.713 SYMLINK libspdk_nbd.so 00:06:20.713 CC lib/ftl/ftl_io.o 00:06:20.713 CC lib/scsi/scsi_pr.o 00:06:20.713 CC lib/ftl/ftl_sb.o 00:06:20.713 CC lib/ftl/ftl_l2p.o 00:06:20.713 CC lib/scsi/scsi_rpc.o 00:06:20.972 LIB libspdk_ublk.a 00:06:20.972 CC lib/scsi/task.o 00:06:20.972 SO libspdk_ublk.so.3.0 00:06:20.972 CC lib/nvmf/nvmf.o 00:06:20.972 SYMLINK libspdk_ublk.so 00:06:20.972 CC lib/nvmf/nvmf_rpc.o 00:06:20.972 CC lib/ftl/ftl_l2p_flat.o 00:06:20.972 CC lib/nvmf/transport.o 00:06:20.972 CC lib/ftl/ftl_nv_cache.o 00:06:20.972 CC lib/ftl/ftl_band.o 00:06:21.231 CC lib/nvmf/tcp.o 00:06:21.231 LIB libspdk_scsi.a 00:06:21.231 CC lib/nvmf/stubs.o 00:06:21.231 CC lib/nvmf/mdns_server.o 00:06:21.231 SO libspdk_scsi.so.9.0 00:06:21.491 SYMLINK libspdk_scsi.so 00:06:21.491 CC lib/ftl/ftl_band_ops.o 00:06:21.491 CC lib/ftl/ftl_writer.o 00:06:21.750 CC lib/nvmf/rdma.o 00:06:21.750 CC lib/ftl/ftl_rq.o 00:06:21.750 CC lib/nvmf/auth.o 00:06:21.750 CC lib/ftl/ftl_reloc.o 00:06:21.750 CC lib/ftl/ftl_l2p_cache.o 00:06:22.009 CC lib/ftl/ftl_p2l.o 00:06:22.009 CC lib/ftl/ftl_p2l_log.o 00:06:22.009 CC lib/iscsi/conn.o 00:06:22.009 CC lib/iscsi/init_grp.o 00:06:22.009 CC lib/vhost/vhost.o 00:06:22.268 CC lib/iscsi/iscsi.o 00:06:22.268 CC lib/ftl/mngt/ftl_mngt.o 00:06:22.268 CC lib/vhost/vhost_rpc.o 00:06:22.268 CC lib/vhost/vhost_scsi.o 00:06:22.268 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:22.834 CC lib/iscsi/param.o 00:06:22.834 CC lib/vhost/vhost_blk.o 00:06:22.834 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:22.834 CC 
lib/vhost/rte_vhost_user.o 00:06:22.834 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:22.834 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:22.834 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:23.092 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:23.092 CC lib/iscsi/portal_grp.o 00:06:23.093 CC lib/iscsi/tgt_node.o 00:06:23.093 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:23.352 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:23.352 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:23.352 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:23.352 CC lib/iscsi/iscsi_subsystem.o 00:06:23.352 CC lib/iscsi/iscsi_rpc.o 00:06:23.648 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:23.648 CC lib/iscsi/task.o 00:06:23.648 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:23.648 CC lib/ftl/utils/ftl_conf.o 00:06:23.648 CC lib/ftl/utils/ftl_md.o 00:06:23.648 CC lib/ftl/utils/ftl_mempool.o 00:06:23.906 CC lib/ftl/utils/ftl_bitmap.o 00:06:23.906 CC lib/ftl/utils/ftl_property.o 00:06:23.906 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:23.906 LIB libspdk_iscsi.a 00:06:23.906 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:23.906 LIB libspdk_vhost.a 00:06:23.906 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:23.906 SO libspdk_iscsi.so.8.0 00:06:23.906 SO libspdk_vhost.so.8.0 00:06:23.906 LIB libspdk_nvmf.a 00:06:24.163 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:24.163 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:24.163 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:24.163 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:24.163 SYMLINK libspdk_vhost.so 00:06:24.163 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:24.163 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:24.163 SYMLINK libspdk_iscsi.so 00:06:24.163 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:24.163 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:24.163 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:24.163 SO libspdk_nvmf.so.20.0 00:06:24.163 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:24.420 CC lib/ftl/base/ftl_base_dev.o 00:06:24.420 CC lib/ftl/base/ftl_base_bdev.o 00:06:24.420 CC lib/ftl/ftl_trace.o 00:06:24.420 SYMLINK libspdk_nvmf.so 00:06:24.678 LIB libspdk_ftl.a 00:06:24.936 SO libspdk_ftl.so.9.0 00:06:25.194 SYMLINK libspdk_ftl.so 00:06:25.760 CC module/env_dpdk/env_dpdk_rpc.o 00:06:25.760 CC module/keyring/file/keyring.o 00:06:25.760 CC module/scheduler/gscheduler/gscheduler.o 00:06:25.760 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:25.760 CC module/sock/posix/posix.o 00:06:25.760 CC module/accel/error/accel_error.o 00:06:25.760 CC module/blob/bdev/blob_bdev.o 00:06:25.760 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:25.760 CC module/keyring/linux/keyring.o 00:06:25.760 CC module/fsdev/aio/fsdev_aio.o 00:06:25.760 LIB libspdk_env_dpdk_rpc.a 00:06:25.760 SO libspdk_env_dpdk_rpc.so.6.0 00:06:26.020 CC module/keyring/linux/keyring_rpc.o 00:06:26.020 LIB libspdk_scheduler_gscheduler.a 00:06:26.020 SYMLINK libspdk_env_dpdk_rpc.so 00:06:26.020 CC module/accel/error/accel_error_rpc.o 00:06:26.020 CC module/keyring/file/keyring_rpc.o 00:06:26.020 LIB libspdk_scheduler_dpdk_governor.a 00:06:26.020 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:26.020 SO libspdk_scheduler_gscheduler.so.4.0 00:06:26.020 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:26.020 LIB libspdk_scheduler_dynamic.a 00:06:26.020 SO libspdk_scheduler_dynamic.so.4.0 00:06:26.020 SYMLINK libspdk_scheduler_gscheduler.so 00:06:26.020 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:26.020 LIB libspdk_blob_bdev.a 00:06:26.020 SYMLINK libspdk_scheduler_dynamic.so 00:06:26.020 CC module/fsdev/aio/linux_aio_mgr.o 00:06:26.020 LIB libspdk_keyring_linux.a 00:06:26.020 LIB libspdk_keyring_file.a 00:06:26.020 
SO libspdk_blob_bdev.so.11.0 00:06:26.020 LIB libspdk_accel_error.a 00:06:26.020 SO libspdk_keyring_linux.so.1.0 00:06:26.020 SO libspdk_keyring_file.so.2.0 00:06:26.278 SO libspdk_accel_error.so.2.0 00:06:26.278 SYMLINK libspdk_blob_bdev.so 00:06:26.278 SYMLINK libspdk_keyring_linux.so 00:06:26.278 SYMLINK libspdk_keyring_file.so 00:06:26.278 SYMLINK libspdk_accel_error.so 00:06:26.278 CC module/accel/dsa/accel_dsa.o 00:06:26.278 CC module/accel/dsa/accel_dsa_rpc.o 00:06:26.278 CC module/accel/ioat/accel_ioat.o 00:06:26.278 CC module/accel/ioat/accel_ioat_rpc.o 00:06:26.278 CC module/accel/iaa/accel_iaa.o 00:06:26.536 CC module/accel/iaa/accel_iaa_rpc.o 00:06:26.536 LIB libspdk_accel_ioat.a 00:06:26.536 LIB libspdk_fsdev_aio.a 00:06:26.536 SO libspdk_accel_ioat.so.6.0 00:06:26.536 CC module/bdev/delay/vbdev_delay.o 00:06:26.536 CC module/blobfs/bdev/blobfs_bdev.o 00:06:26.536 CC module/bdev/error/vbdev_error.o 00:06:26.536 SO libspdk_fsdev_aio.so.1.0 00:06:26.536 LIB libspdk_sock_posix.a 00:06:26.536 LIB libspdk_accel_dsa.a 00:06:26.536 SYMLINK libspdk_accel_ioat.so 00:06:26.536 SO libspdk_accel_dsa.so.5.0 00:06:26.536 SO libspdk_sock_posix.so.6.0 00:06:26.536 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:26.536 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:26.536 LIB libspdk_accel_iaa.a 00:06:26.536 CC module/bdev/gpt/gpt.o 00:06:26.536 SYMLINK libspdk_fsdev_aio.so 00:06:26.794 CC module/bdev/gpt/vbdev_gpt.o 00:06:26.794 SO libspdk_accel_iaa.so.3.0 00:06:26.794 SYMLINK libspdk_accel_dsa.so 00:06:26.794 SYMLINK libspdk_sock_posix.so 00:06:26.794 CC module/bdev/error/vbdev_error_rpc.o 00:06:26.794 SYMLINK libspdk_accel_iaa.so 00:06:26.794 LIB libspdk_blobfs_bdev.a 00:06:26.794 SO libspdk_blobfs_bdev.so.6.0 00:06:26.794 CC module/bdev/lvol/vbdev_lvol.o 00:06:26.794 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:26.794 LIB libspdk_bdev_error.a 00:06:27.052 CC module/bdev/malloc/bdev_malloc.o 00:06:27.052 LIB libspdk_bdev_delay.a 00:06:27.052 CC module/bdev/null/bdev_null.o 00:06:27.052 SYMLINK libspdk_blobfs_bdev.so 00:06:27.052 CC module/bdev/null/bdev_null_rpc.o 00:06:27.052 SO libspdk_bdev_error.so.6.0 00:06:27.052 SO libspdk_bdev_delay.so.6.0 00:06:27.052 LIB libspdk_bdev_gpt.a 00:06:27.052 CC module/bdev/nvme/bdev_nvme.o 00:06:27.052 SO libspdk_bdev_gpt.so.6.0 00:06:27.052 CC module/bdev/passthru/vbdev_passthru.o 00:06:27.052 SYMLINK libspdk_bdev_delay.so 00:06:27.052 SYMLINK libspdk_bdev_error.so 00:06:27.052 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:27.052 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:27.052 SYMLINK libspdk_bdev_gpt.so 00:06:27.315 LIB libspdk_bdev_null.a 00:06:27.315 SO libspdk_bdev_null.so.6.0 00:06:27.315 CC module/bdev/raid/bdev_raid.o 00:06:27.315 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:27.315 LIB libspdk_bdev_passthru.a 00:06:27.315 SYMLINK libspdk_bdev_null.so 00:06:27.315 CC module/bdev/split/vbdev_split.o 00:06:27.315 CC module/bdev/split/vbdev_split_rpc.o 00:06:27.315 SO libspdk_bdev_passthru.so.6.0 00:06:27.315 LIB libspdk_bdev_lvol.a 00:06:27.315 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:27.315 SO libspdk_bdev_lvol.so.6.0 00:06:27.575 SYMLINK libspdk_bdev_passthru.so 00:06:27.575 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:27.575 CC module/bdev/aio/bdev_aio.o 00:06:27.575 LIB libspdk_bdev_malloc.a 00:06:27.575 SYMLINK libspdk_bdev_lvol.so 00:06:27.575 CC module/bdev/aio/bdev_aio_rpc.o 00:06:27.575 SO libspdk_bdev_malloc.so.6.0 00:06:27.575 LIB libspdk_bdev_split.a 00:06:27.575 SYMLINK libspdk_bdev_malloc.so 00:06:27.575 SO 
libspdk_bdev_split.so.6.0 00:06:27.575 CC module/bdev/raid/bdev_raid_rpc.o 00:06:27.575 CC module/bdev/raid/bdev_raid_sb.o 00:06:27.860 CC module/bdev/ftl/bdev_ftl.o 00:06:27.860 SYMLINK libspdk_bdev_split.so 00:06:27.860 CC module/bdev/nvme/nvme_rpc.o 00:06:27.860 LIB libspdk_bdev_zone_block.a 00:06:27.860 CC module/bdev/iscsi/bdev_iscsi.o 00:06:27.860 SO libspdk_bdev_zone_block.so.6.0 00:06:27.860 LIB libspdk_bdev_aio.a 00:06:27.860 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:27.860 SO libspdk_bdev_aio.so.6.0 00:06:27.860 SYMLINK libspdk_bdev_zone_block.so 00:06:27.860 CC module/bdev/nvme/bdev_mdns_client.o 00:06:27.860 CC module/bdev/nvme/vbdev_opal.o 00:06:27.860 SYMLINK libspdk_bdev_aio.so 00:06:27.860 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:27.860 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:28.117 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:28.117 CC module/bdev/raid/raid0.o 00:06:28.117 LIB libspdk_bdev_ftl.a 00:06:28.117 CC module/bdev/raid/raid1.o 00:06:28.117 LIB libspdk_bdev_iscsi.a 00:06:28.117 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:28.117 SO libspdk_bdev_ftl.so.6.0 00:06:28.117 CC module/bdev/raid/concat.o 00:06:28.117 SO libspdk_bdev_iscsi.so.6.0 00:06:28.375 SYMLINK libspdk_bdev_ftl.so 00:06:28.375 SYMLINK libspdk_bdev_iscsi.so 00:06:28.375 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:28.375 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:28.634 LIB libspdk_bdev_raid.a 00:06:28.634 LIB libspdk_bdev_virtio.a 00:06:28.634 SO libspdk_bdev_raid.so.6.0 00:06:28.634 SO libspdk_bdev_virtio.so.6.0 00:06:28.634 SYMLINK libspdk_bdev_virtio.so 00:06:28.634 SYMLINK libspdk_bdev_raid.so 00:06:29.569 LIB libspdk_bdev_nvme.a 00:06:29.569 SO libspdk_bdev_nvme.so.7.1 00:06:29.569 SYMLINK libspdk_bdev_nvme.so 00:06:30.504 CC module/event/subsystems/sock/sock.o 00:06:30.504 CC module/event/subsystems/vmd/vmd.o 00:06:30.504 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:30.504 CC module/event/subsystems/scheduler/scheduler.o 00:06:30.504 CC module/event/subsystems/keyring/keyring.o 00:06:30.504 CC module/event/subsystems/iobuf/iobuf.o 00:06:30.504 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:30.504 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:30.504 CC module/event/subsystems/fsdev/fsdev.o 00:06:30.504 LIB libspdk_event_sock.a 00:06:30.504 LIB libspdk_event_keyring.a 00:06:30.504 LIB libspdk_event_vmd.a 00:06:30.504 LIB libspdk_event_scheduler.a 00:06:30.504 LIB libspdk_event_iobuf.a 00:06:30.504 LIB libspdk_event_vhost_blk.a 00:06:30.504 SO libspdk_event_keyring.so.1.0 00:06:30.504 SO libspdk_event_sock.so.5.0 00:06:30.504 LIB libspdk_event_fsdev.a 00:06:30.504 SO libspdk_event_vmd.so.6.0 00:06:30.504 SO libspdk_event_scheduler.so.4.0 00:06:30.504 SO libspdk_event_iobuf.so.3.0 00:06:30.504 SO libspdk_event_fsdev.so.1.0 00:06:30.504 SO libspdk_event_vhost_blk.so.3.0 00:06:30.504 SYMLINK libspdk_event_keyring.so 00:06:30.504 SYMLINK libspdk_event_sock.so 00:06:30.504 SYMLINK libspdk_event_scheduler.so 00:06:30.504 SYMLINK libspdk_event_vmd.so 00:06:30.504 SYMLINK libspdk_event_fsdev.so 00:06:30.504 SYMLINK libspdk_event_iobuf.so 00:06:30.504 SYMLINK libspdk_event_vhost_blk.so 00:06:31.073 CC module/event/subsystems/accel/accel.o 00:06:31.073 LIB libspdk_event_accel.a 00:06:31.332 SO libspdk_event_accel.so.6.0 00:06:31.332 SYMLINK libspdk_event_accel.so 00:06:31.899 CC module/event/subsystems/bdev/bdev.o 00:06:31.899 LIB libspdk_event_bdev.a 00:06:31.899 SO libspdk_event_bdev.so.6.0 00:06:32.158 SYMLINK libspdk_event_bdev.so 00:06:32.416 CC 
module/event/subsystems/nbd/nbd.o 00:06:32.416 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:32.416 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:32.416 CC module/event/subsystems/scsi/scsi.o 00:06:32.416 CC module/event/subsystems/ublk/ublk.o 00:06:32.416 LIB libspdk_event_nbd.a 00:06:32.675 LIB libspdk_event_ublk.a 00:06:32.675 LIB libspdk_event_scsi.a 00:06:32.675 SO libspdk_event_nbd.so.6.0 00:06:32.675 SO libspdk_event_ublk.so.3.0 00:06:32.675 SO libspdk_event_scsi.so.6.0 00:06:32.675 SYMLINK libspdk_event_nbd.so 00:06:32.675 LIB libspdk_event_nvmf.a 00:06:32.675 SYMLINK libspdk_event_ublk.so 00:06:32.675 SYMLINK libspdk_event_scsi.so 00:06:32.675 SO libspdk_event_nvmf.so.6.0 00:06:32.675 SYMLINK libspdk_event_nvmf.so 00:06:33.242 CC module/event/subsystems/iscsi/iscsi.o 00:06:33.243 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:33.243 LIB libspdk_event_iscsi.a 00:06:33.243 LIB libspdk_event_vhost_scsi.a 00:06:33.243 SO libspdk_event_iscsi.so.6.0 00:06:33.243 SO libspdk_event_vhost_scsi.so.3.0 00:06:33.243 SYMLINK libspdk_event_iscsi.so 00:06:33.243 SYMLINK libspdk_event_vhost_scsi.so 00:06:33.502 SO libspdk.so.6.0 00:06:33.502 SYMLINK libspdk.so 00:06:34.069 CC app/spdk_nvme_perf/perf.o 00:06:34.069 CXX app/trace/trace.o 00:06:34.069 CC app/trace_record/trace_record.o 00:06:34.069 CC app/spdk_lspci/spdk_lspci.o 00:06:34.069 CC app/spdk_nvme_identify/identify.o 00:06:34.069 CC app/spdk_tgt/spdk_tgt.o 00:06:34.069 CC app/iscsi_tgt/iscsi_tgt.o 00:06:34.069 CC app/nvmf_tgt/nvmf_main.o 00:06:34.069 CC test/thread/poller_perf/poller_perf.o 00:06:34.069 CC examples/util/zipf/zipf.o 00:06:34.069 LINK spdk_lspci 00:06:34.069 LINK spdk_trace_record 00:06:34.069 LINK poller_perf 00:06:34.069 LINK spdk_tgt 00:06:34.069 LINK nvmf_tgt 00:06:34.069 LINK iscsi_tgt 00:06:34.069 LINK zipf 00:06:34.328 LINK spdk_trace 00:06:34.586 CC app/spdk_nvme_discover/discovery_aer.o 00:06:34.586 CC examples/ioat/perf/perf.o 00:06:34.586 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:34.586 CC test/dma/test_dma/test_dma.o 00:06:34.586 LINK spdk_nvme_discover 00:06:34.586 CC examples/thread/thread/thread_ex.o 00:06:34.586 TEST_HEADER include/spdk/accel.h 00:06:34.586 TEST_HEADER include/spdk/accel_module.h 00:06:34.586 TEST_HEADER include/spdk/assert.h 00:06:34.844 TEST_HEADER include/spdk/barrier.h 00:06:34.844 TEST_HEADER include/spdk/base64.h 00:06:34.844 TEST_HEADER include/spdk/bdev.h 00:06:34.844 TEST_HEADER include/spdk/bdev_module.h 00:06:34.844 TEST_HEADER include/spdk/bdev_zone.h 00:06:34.844 CC test/app/bdev_svc/bdev_svc.o 00:06:34.844 TEST_HEADER include/spdk/bit_array.h 00:06:34.844 TEST_HEADER include/spdk/bit_pool.h 00:06:34.844 LINK spdk_nvme_perf 00:06:34.844 TEST_HEADER include/spdk/blob_bdev.h 00:06:34.844 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:34.844 TEST_HEADER include/spdk/blobfs.h 00:06:34.844 TEST_HEADER include/spdk/blob.h 00:06:34.844 TEST_HEADER include/spdk/conf.h 00:06:34.844 TEST_HEADER include/spdk/config.h 00:06:34.844 TEST_HEADER include/spdk/cpuset.h 00:06:34.844 TEST_HEADER include/spdk/crc16.h 00:06:34.844 TEST_HEADER include/spdk/crc32.h 00:06:34.844 TEST_HEADER include/spdk/crc64.h 00:06:34.844 TEST_HEADER include/spdk/dif.h 00:06:34.844 TEST_HEADER include/spdk/dma.h 00:06:34.844 TEST_HEADER include/spdk/endian.h 00:06:34.844 TEST_HEADER include/spdk/env_dpdk.h 00:06:34.844 TEST_HEADER include/spdk/env.h 00:06:34.844 TEST_HEADER include/spdk/event.h 00:06:34.844 TEST_HEADER include/spdk/fd_group.h 00:06:34.844 TEST_HEADER include/spdk/fd.h 00:06:34.844 
TEST_HEADER include/spdk/file.h 00:06:34.844 TEST_HEADER include/spdk/fsdev.h 00:06:34.844 TEST_HEADER include/spdk/fsdev_module.h 00:06:34.844 LINK ioat_perf 00:06:34.844 TEST_HEADER include/spdk/ftl.h 00:06:34.844 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:34.844 TEST_HEADER include/spdk/gpt_spec.h 00:06:34.844 TEST_HEADER include/spdk/hexlify.h 00:06:34.844 TEST_HEADER include/spdk/histogram_data.h 00:06:34.844 TEST_HEADER include/spdk/idxd.h 00:06:34.844 TEST_HEADER include/spdk/idxd_spec.h 00:06:34.844 TEST_HEADER include/spdk/init.h 00:06:34.844 TEST_HEADER include/spdk/ioat.h 00:06:34.844 TEST_HEADER include/spdk/ioat_spec.h 00:06:34.844 LINK interrupt_tgt 00:06:34.844 TEST_HEADER include/spdk/iscsi_spec.h 00:06:34.844 TEST_HEADER include/spdk/json.h 00:06:34.844 TEST_HEADER include/spdk/jsonrpc.h 00:06:34.844 TEST_HEADER include/spdk/keyring.h 00:06:34.844 TEST_HEADER include/spdk/keyring_module.h 00:06:34.844 TEST_HEADER include/spdk/likely.h 00:06:34.844 TEST_HEADER include/spdk/log.h 00:06:34.844 TEST_HEADER include/spdk/lvol.h 00:06:34.844 TEST_HEADER include/spdk/md5.h 00:06:34.844 LINK spdk_nvme_identify 00:06:34.844 TEST_HEADER include/spdk/memory.h 00:06:34.844 TEST_HEADER include/spdk/mmio.h 00:06:34.844 TEST_HEADER include/spdk/nbd.h 00:06:34.844 TEST_HEADER include/spdk/net.h 00:06:34.844 TEST_HEADER include/spdk/notify.h 00:06:34.844 TEST_HEADER include/spdk/nvme.h 00:06:34.844 TEST_HEADER include/spdk/nvme_intel.h 00:06:34.844 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:34.844 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:34.844 TEST_HEADER include/spdk/nvme_spec.h 00:06:34.844 TEST_HEADER include/spdk/nvme_zns.h 00:06:34.844 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:34.844 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:34.844 TEST_HEADER include/spdk/nvmf.h 00:06:34.844 TEST_HEADER include/spdk/nvmf_spec.h 00:06:34.844 TEST_HEADER include/spdk/nvmf_transport.h 00:06:34.844 TEST_HEADER include/spdk/opal.h 00:06:34.844 TEST_HEADER include/spdk/opal_spec.h 00:06:34.844 TEST_HEADER include/spdk/pci_ids.h 00:06:34.844 TEST_HEADER include/spdk/pipe.h 00:06:34.844 TEST_HEADER include/spdk/queue.h 00:06:34.844 TEST_HEADER include/spdk/reduce.h 00:06:34.844 TEST_HEADER include/spdk/rpc.h 00:06:34.844 TEST_HEADER include/spdk/scheduler.h 00:06:34.844 TEST_HEADER include/spdk/scsi.h 00:06:34.844 TEST_HEADER include/spdk/scsi_spec.h 00:06:34.844 TEST_HEADER include/spdk/sock.h 00:06:34.844 TEST_HEADER include/spdk/stdinc.h 00:06:34.844 TEST_HEADER include/spdk/string.h 00:06:34.844 TEST_HEADER include/spdk/thread.h 00:06:34.844 TEST_HEADER include/spdk/trace.h 00:06:34.844 TEST_HEADER include/spdk/trace_parser.h 00:06:34.844 TEST_HEADER include/spdk/tree.h 00:06:34.844 TEST_HEADER include/spdk/ublk.h 00:06:34.844 TEST_HEADER include/spdk/util.h 00:06:34.844 TEST_HEADER include/spdk/uuid.h 00:06:34.844 TEST_HEADER include/spdk/version.h 00:06:34.844 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:34.844 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:34.844 TEST_HEADER include/spdk/vhost.h 00:06:34.844 TEST_HEADER include/spdk/vmd.h 00:06:34.844 LINK bdev_svc 00:06:34.844 TEST_HEADER include/spdk/xor.h 00:06:34.844 TEST_HEADER include/spdk/zipf.h 00:06:34.844 CXX test/cpp_headers/accel.o 00:06:35.102 LINK thread 00:06:35.102 CC examples/ioat/verify/verify.o 00:06:35.102 CC app/spdk_top/spdk_top.o 00:06:35.102 CXX test/cpp_headers/accel_module.o 00:06:35.102 CC examples/sock/hello_world/hello_sock.o 00:06:35.102 CC test/env/mem_callbacks/mem_callbacks.o 00:06:35.102 LINK 
test_dma 00:06:35.102 CC examples/vmd/lsvmd/lsvmd.o 00:06:35.360 CXX test/cpp_headers/assert.o 00:06:35.360 LINK verify 00:06:35.360 CC examples/vmd/led/led.o 00:06:35.360 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:35.360 LINK hello_sock 00:06:35.360 LINK lsvmd 00:06:35.619 CXX test/cpp_headers/barrier.o 00:06:35.619 LINK led 00:06:35.619 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:35.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:35.877 CXX test/cpp_headers/base64.o 00:06:35.877 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:35.877 LINK mem_callbacks 00:06:35.877 CC test/event/event_perf/event_perf.o 00:06:35.877 LINK nvme_fuzz 00:06:35.877 CC examples/idxd/perf/perf.o 00:06:35.877 CC test/event/reactor/reactor.o 00:06:35.877 CXX test/cpp_headers/bdev.o 00:06:36.134 LINK event_perf 00:06:36.134 LINK spdk_top 00:06:36.135 LINK reactor 00:06:36.135 CC test/env/vtophys/vtophys.o 00:06:36.135 CXX test/cpp_headers/bdev_module.o 00:06:36.135 CC test/event/reactor_perf/reactor_perf.o 00:06:36.393 LINK vhost_fuzz 00:06:36.393 LINK idxd_perf 00:06:36.393 LINK vtophys 00:06:36.393 CC test/app/histogram_perf/histogram_perf.o 00:06:36.393 LINK reactor_perf 00:06:36.393 CXX test/cpp_headers/bdev_zone.o 00:06:36.393 CC app/vhost/vhost.o 00:06:36.651 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:36.651 LINK histogram_perf 00:06:36.651 CC test/event/app_repeat/app_repeat.o 00:06:36.651 CXX test/cpp_headers/bit_array.o 00:06:36.651 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:36.651 CC test/nvme/aer/aer.o 00:06:36.651 LINK vhost 00:06:36.910 LINK app_repeat 00:06:36.910 CXX test/cpp_headers/bit_pool.o 00:06:36.910 LINK env_dpdk_post_init 00:06:36.910 LINK hello_fsdev 00:06:36.910 CC examples/accel/perf/accel_perf.o 00:06:36.910 LINK aer 00:06:36.910 CXX test/cpp_headers/blob_bdev.o 00:06:37.168 CC examples/blob/hello_world/hello_blob.o 00:06:37.168 CC app/spdk_dd/spdk_dd.o 00:06:37.168 CC test/env/memory/memory_ut.o 00:06:37.168 CC test/event/scheduler/scheduler.o 00:06:37.168 CXX test/cpp_headers/blobfs_bdev.o 00:06:37.426 LINK iscsi_fuzz 00:06:37.426 CC test/nvme/reset/reset.o 00:06:37.426 LINK hello_blob 00:06:37.426 LINK accel_perf 00:06:37.426 CC app/fio/nvme/fio_plugin.o 00:06:37.426 CXX test/cpp_headers/blobfs.o 00:06:37.684 LINK scheduler 00:06:37.684 CXX test/cpp_headers/blob.o 00:06:37.684 LINK spdk_dd 00:06:37.684 LINK reset 00:06:37.942 CC examples/blob/cli/blobcli.o 00:06:37.942 CXX test/cpp_headers/conf.o 00:06:37.942 CC test/app/jsoncat/jsoncat.o 00:06:37.942 CC test/rpc_client/rpc_client_test.o 00:06:37.942 CXX test/cpp_headers/config.o 00:06:37.942 LINK jsoncat 00:06:37.942 CXX test/cpp_headers/cpuset.o 00:06:37.942 CC test/nvme/sgl/sgl.o 00:06:37.942 LINK rpc_client_test 00:06:38.199 CC test/nvme/e2edp/nvme_dp.o 00:06:38.199 CC test/nvme/overhead/overhead.o 00:06:38.199 LINK spdk_nvme 00:06:38.199 CXX test/cpp_headers/crc16.o 00:06:38.199 CXX test/cpp_headers/crc32.o 00:06:38.458 LINK sgl 00:06:38.458 LINK blobcli 00:06:38.458 CC test/app/stub/stub.o 00:06:38.458 CC app/fio/bdev/fio_plugin.o 00:06:38.458 LINK memory_ut 00:06:38.458 LINK overhead 00:06:38.458 LINK nvme_dp 00:06:38.717 CXX test/cpp_headers/crc64.o 00:06:38.717 LINK stub 00:06:38.717 CXX test/cpp_headers/dif.o 00:06:38.976 CC examples/nvme/hello_world/hello_world.o 00:06:38.976 CC test/nvme/err_injection/err_injection.o 00:06:38.976 CC test/env/pci/pci_ut.o 00:06:38.976 CC test/nvme/startup/startup.o 00:06:38.976 CXX test/cpp_headers/dma.o 00:06:39.238 CC test/nvme/reserve/reserve.o 
00:06:39.238 LINK spdk_bdev 00:06:39.238 CC test/accel/dif/dif.o 00:06:39.238 CC examples/bdev/hello_world/hello_bdev.o 00:06:39.238 LINK startup 00:06:39.238 LINK hello_world 00:06:39.238 LINK err_injection 00:06:39.238 CXX test/cpp_headers/endian.o 00:06:39.501 LINK reserve 00:06:39.501 LINK pci_ut 00:06:39.501 CC test/nvme/simple_copy/simple_copy.o 00:06:39.501 CXX test/cpp_headers/env_dpdk.o 00:06:39.501 LINK hello_bdev 00:06:39.759 CC test/nvme/connect_stress/connect_stress.o 00:06:39.759 CXX test/cpp_headers/env.o 00:06:39.759 CC examples/nvme/reconnect/reconnect.o 00:06:39.759 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:39.759 LINK simple_copy 00:06:39.759 CC test/nvme/boot_partition/boot_partition.o 00:06:40.017 LINK connect_stress 00:06:40.017 LINK dif 00:06:40.017 CXX test/cpp_headers/event.o 00:06:40.017 CC examples/bdev/bdevperf/bdevperf.o 00:06:40.017 LINK boot_partition 00:06:40.017 LINK reconnect 00:06:40.017 CXX test/cpp_headers/fd_group.o 00:06:40.017 CC test/blobfs/mkfs/mkfs.o 00:06:40.275 CC test/nvme/compliance/nvme_compliance.o 00:06:40.275 LINK nvme_manage 00:06:40.275 CC test/nvme/fused_ordering/fused_ordering.o 00:06:40.275 CXX test/cpp_headers/fd.o 00:06:40.275 LINK mkfs 00:06:40.275 CC examples/nvme/arbitration/arbitration.o 00:06:40.534 LINK fused_ordering 00:06:40.534 CC test/bdev/bdevio/bdevio.o 00:06:40.534 CXX test/cpp_headers/file.o 00:06:40.534 LINK nvme_compliance 00:06:40.534 CC examples/nvme/hotplug/hotplug.o 00:06:40.534 CC test/lvol/esnap/esnap.o 00:06:40.534 CXX test/cpp_headers/fsdev.o 00:06:40.792 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:40.792 LINK arbitration 00:06:40.792 LINK bdevperf 00:06:40.792 LINK hotplug 00:06:40.792 CXX test/cpp_headers/fsdev_module.o 00:06:40.792 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:40.792 CC test/nvme/fdp/fdp.o 00:06:40.792 LINK bdevio 00:06:40.792 LINK doorbell_aers 00:06:41.050 CXX test/cpp_headers/ftl.o 00:06:41.050 CXX test/cpp_headers/fuse_dispatcher.o 00:06:41.050 LINK cmb_copy 00:06:41.050 CXX test/cpp_headers/gpt_spec.o 00:06:41.050 CXX test/cpp_headers/hexlify.o 00:06:41.050 LINK fdp 00:06:41.050 CC examples/nvme/abort/abort.o 00:06:41.309 CXX test/cpp_headers/histogram_data.o 00:06:41.309 CC test/nvme/cuse/cuse.o 00:06:41.309 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:41.309 CXX test/cpp_headers/idxd.o 00:06:41.309 CXX test/cpp_headers/idxd_spec.o 00:06:41.309 CXX test/cpp_headers/init.o 00:06:41.309 CXX test/cpp_headers/ioat.o 00:06:41.309 CXX test/cpp_headers/ioat_spec.o 00:06:41.568 LINK pmr_persistence 00:06:41.568 CXX test/cpp_headers/iscsi_spec.o 00:06:41.568 CXX test/cpp_headers/json.o 00:06:41.568 CXX test/cpp_headers/jsonrpc.o 00:06:41.568 LINK abort 00:06:41.568 CXX test/cpp_headers/keyring.o 00:06:41.568 CXX test/cpp_headers/keyring_module.o 00:06:41.568 CXX test/cpp_headers/likely.o 00:06:41.568 CXX test/cpp_headers/log.o 00:06:41.568 CXX test/cpp_headers/lvol.o 00:06:41.568 CXX test/cpp_headers/md5.o 00:06:41.568 CXX test/cpp_headers/memory.o 00:06:41.827 CXX test/cpp_headers/mmio.o 00:06:41.827 CXX test/cpp_headers/nbd.o 00:06:41.827 CXX test/cpp_headers/net.o 00:06:41.827 CXX test/cpp_headers/notify.o 00:06:41.827 CXX test/cpp_headers/nvme.o 00:06:41.827 CXX test/cpp_headers/nvme_intel.o 00:06:41.827 CXX test/cpp_headers/nvme_ocssd.o 00:06:41.827 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:41.827 CXX test/cpp_headers/nvme_spec.o 00:06:41.827 CXX test/cpp_headers/nvme_zns.o 00:06:41.827 CXX test/cpp_headers/nvmf_cmd.o 00:06:42.086 CC examples/nvmf/nvmf/nvmf.o 
00:06:42.086 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:42.086 CXX test/cpp_headers/nvmf.o 00:06:42.086 CXX test/cpp_headers/nvmf_spec.o 00:06:42.086 CXX test/cpp_headers/nvmf_transport.o 00:06:42.086 CXX test/cpp_headers/opal.o 00:06:42.086 CXX test/cpp_headers/opal_spec.o 00:06:42.086 CXX test/cpp_headers/pci_ids.o 00:06:42.086 CXX test/cpp_headers/pipe.o 00:06:42.086 CXX test/cpp_headers/queue.o 00:06:42.345 CXX test/cpp_headers/reduce.o 00:06:42.345 CXX test/cpp_headers/rpc.o 00:06:42.345 LINK nvmf 00:06:42.345 CXX test/cpp_headers/scheduler.o 00:06:42.345 CXX test/cpp_headers/scsi.o 00:06:42.345 CXX test/cpp_headers/scsi_spec.o 00:06:42.345 CXX test/cpp_headers/sock.o 00:06:42.345 CXX test/cpp_headers/stdinc.o 00:06:42.345 CXX test/cpp_headers/string.o 00:06:42.345 CXX test/cpp_headers/thread.o 00:06:42.345 CXX test/cpp_headers/trace.o 00:06:42.603 CXX test/cpp_headers/trace_parser.o 00:06:42.603 CXX test/cpp_headers/tree.o 00:06:42.603 CXX test/cpp_headers/ublk.o 00:06:42.603 CXX test/cpp_headers/util.o 00:06:42.603 CXX test/cpp_headers/uuid.o 00:06:42.603 CXX test/cpp_headers/version.o 00:06:42.603 LINK cuse 00:06:42.603 CXX test/cpp_headers/vfio_user_pci.o 00:06:42.603 CXX test/cpp_headers/vfio_user_spec.o 00:06:42.603 CXX test/cpp_headers/vhost.o 00:06:42.603 CXX test/cpp_headers/vmd.o 00:06:42.603 CXX test/cpp_headers/xor.o 00:06:42.603 CXX test/cpp_headers/zipf.o 00:06:45.902 LINK esnap 00:06:45.902 00:06:45.902 real 1m31.410s 00:06:45.902 user 7m42.904s 00:06:45.902 sys 2m8.264s 00:06:45.902 13:36:26 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:45.902 ************************************ 00:06:45.902 END TEST make 00:06:45.902 ************************************ 00:06:45.902 13:36:26 make -- common/autotest_common.sh@10 -- $ set +x 00:06:45.902 13:36:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:45.902 13:36:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:45.902 13:36:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:45.902 13:36:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:45.902 13:36:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:45.902 13:36:26 -- pm/common@44 -- $ pid=5248 00:06:45.903 13:36:26 -- pm/common@50 -- $ kill -TERM 5248 00:06:45.903 13:36:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:45.903 13:36:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:45.903 13:36:26 -- pm/common@44 -- $ pid=5249 00:06:45.903 13:36:26 -- pm/common@50 -- $ kill -TERM 5249 00:06:45.903 13:36:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:45.903 13:36:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:46.163 13:36:26 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:46.163 13:36:26 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:46.163 13:36:26 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.163 13:36:26 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.163 13:36:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.163 13:36:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.163 13:36:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.163 13:36:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.163 13:36:26 -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.163 
13:36:26 -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.163 13:36:26 -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.163 13:36:26 -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.163 13:36:26 -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.163 13:36:26 -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.163 13:36:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.163 13:36:26 -- scripts/common.sh@344 -- # case "$op" in 00:06:46.163 13:36:26 -- scripts/common.sh@345 -- # : 1 00:06:46.163 13:36:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.163 13:36:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.163 13:36:26 -- scripts/common.sh@365 -- # decimal 1 00:06:46.163 13:36:26 -- scripts/common.sh@353 -- # local d=1 00:06:46.163 13:36:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.163 13:36:26 -- scripts/common.sh@355 -- # echo 1 00:06:46.163 13:36:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.163 13:36:26 -- scripts/common.sh@366 -- # decimal 2 00:06:46.163 13:36:26 -- scripts/common.sh@353 -- # local d=2 00:06:46.163 13:36:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.163 13:36:26 -- scripts/common.sh@355 -- # echo 2 00:06:46.163 13:36:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.163 13:36:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.163 13:36:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.163 13:36:26 -- scripts/common.sh@368 -- # return 0 00:06:46.163 13:36:26 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.163 13:36:26 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.163 --rc genhtml_branch_coverage=1 00:06:46.163 --rc genhtml_function_coverage=1 00:06:46.163 --rc genhtml_legend=1 00:06:46.163 --rc geninfo_all_blocks=1 00:06:46.163 --rc geninfo_unexecuted_blocks=1 00:06:46.163 00:06:46.163 ' 00:06:46.163 13:36:26 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.163 --rc genhtml_branch_coverage=1 00:06:46.163 --rc genhtml_function_coverage=1 00:06:46.163 --rc genhtml_legend=1 00:06:46.163 --rc geninfo_all_blocks=1 00:06:46.163 --rc geninfo_unexecuted_blocks=1 00:06:46.163 00:06:46.163 ' 00:06:46.163 13:36:26 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.163 --rc genhtml_branch_coverage=1 00:06:46.163 --rc genhtml_function_coverage=1 00:06:46.163 --rc genhtml_legend=1 00:06:46.163 --rc geninfo_all_blocks=1 00:06:46.163 --rc geninfo_unexecuted_blocks=1 00:06:46.163 00:06:46.163 ' 00:06:46.163 13:36:26 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.163 --rc genhtml_branch_coverage=1 00:06:46.163 --rc genhtml_function_coverage=1 00:06:46.163 --rc genhtml_legend=1 00:06:46.163 --rc geninfo_all_blocks=1 00:06:46.163 --rc geninfo_unexecuted_blocks=1 00:06:46.163 00:06:46.163 ' 00:06:46.163 13:36:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.163 13:36:26 -- nvmf/common.sh@7 -- # uname -s 00:06:46.163 13:36:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.163 13:36:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.163 13:36:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
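The scripts/common.sh trace above is the stock dotted-version check: lcov reports 1.15, cmp_versions splits both operands on '.', '-' and ':' and walks the fields numerically, and because 1 < 2 the legacy --rc lcov_*_coverage=1 option spelling is kept. A standalone sketch of that comparison follows; the function name and layout are illustrative, not the verbatim scripts/common.sh source.

    # Sketch of the dotted-version comparison traced above: returns
    # success (0) when $1 sorts strictly before $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields compare as 0, so "1.15" vs "2" works.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* flag names"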
00:06:46.163 13:36:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.163 13:36:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.163 13:36:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.163 13:36:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.163 13:36:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.163 13:36:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.163 13:36:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.163 13:36:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:06:46.163 13:36:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:06:46.163 13:36:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.163 13:36:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.163 13:36:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:46.163 13:36:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.163 13:36:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.163 13:36:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.163 13:36:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.163 13:36:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.163 13:36:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.163 13:36:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.163 13:36:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.163 13:36:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.163 13:36:27 -- paths/export.sh@5 -- # export PATH 00:06:46.163 13:36:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.163 13:36:27 -- nvmf/common.sh@51 -- # : 0 00:06:46.163 13:36:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.163 13:36:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.163 13:36:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.163 13:36:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.163 13:36:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.163 13:36:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.163 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.163 13:36:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.163 13:36:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:06:46.163 13:36:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.163 13:36:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:46.163 13:36:27 -- spdk/autotest.sh@32 -- # uname -s 00:06:46.163 13:36:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:46.163 13:36:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:46.163 13:36:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:46.163 13:36:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:46.163 13:36:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:46.163 13:36:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:46.422 13:36:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:46.423 13:36:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:46.423 13:36:27 -- spdk/autotest.sh@48 -- # udevadm_pid=56142 00:06:46.423 13:36:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:46.423 13:36:27 -- pm/common@17 -- # local monitor 00:06:46.423 13:36:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:46.423 13:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:46.423 13:36:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:46.423 13:36:27 -- pm/common@25 -- # sleep 1 00:06:46.423 13:36:27 -- pm/common@21 -- # date +%s 00:06:46.423 13:36:27 -- pm/common@21 -- # date +%s 00:06:46.423 13:36:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730900187 00:06:46.423 13:36:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730900187 00:06:46.423 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730900187_collect-cpu-load.pm.log 00:06:46.423 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730900187_collect-vmstat.pm.log 00:06:47.365 13:36:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:47.365 13:36:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:47.365 13:36:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.365 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 13:36:28 -- spdk/autotest.sh@59 -- # create_test_list 00:06:47.365 13:36:28 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:47.365 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:47.365 13:36:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:47.365 13:36:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:47.365 13:36:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:47.365 13:36:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:47.365 13:36:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:47.365 13:36:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:47.365 13:36:28 -- common/autotest_common.sh@1455 -- # uname 00:06:47.366 13:36:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:47.366 13:36:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:47.366 13:36:28 -- common/autotest_common.sh@1475 -- # uname 00:06:47.366 13:36:28 -- 
common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:47.366 13:36:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:47.366 13:36:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:47.366 lcov: LCOV version 1.15 00:06:47.366 13:36:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:05.485 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:05.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:20.367 13:37:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:20.367 13:37:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.367 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.367 13:37:00 -- spdk/autotest.sh@78 -- # rm -f 00:07:20.367 13:37:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:20.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:20.367 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:20.367 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:20.367 13:37:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:20.367 13:37:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:20.367 13:37:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:20.367 13:37:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:20.367 13:37:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.367 13:37:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:20.367 13:37:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:20.367 13:37:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.367 13:37:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:20.367 13:37:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:20.367 13:37:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.367 13:37:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:07:20.367 13:37:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:07:20.367 13:37:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.367 13:37:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:07:20.367 13:37:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:07:20.367 13:37:00 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:20.367 13:37:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.367 13:37:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:20.367 13:37:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:20.367 13:37:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:20.367 13:37:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:20.367 13:37:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:20.367 13:37:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:20.367 No valid GPT data, bailing 00:07:20.367 13:37:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:20.367 13:37:01 -- scripts/common.sh@394 -- # pt= 00:07:20.367 13:37:01 -- scripts/common.sh@395 -- # return 1 00:07:20.367 13:37:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:20.367 1+0 records in 00:07:20.367 1+0 records out 00:07:20.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458377 s, 229 MB/s 00:07:20.367 13:37:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:20.367 13:37:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:20.367 13:37:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:20.367 13:37:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:20.367 13:37:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:20.367 No valid GPT data, bailing 00:07:20.367 13:37:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:20.368 13:37:01 -- scripts/common.sh@394 -- # pt= 00:07:20.368 13:37:01 -- scripts/common.sh@395 -- # return 1 00:07:20.368 13:37:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:20.368 1+0 records in 00:07:20.368 1+0 records out 00:07:20.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591862 s, 177 MB/s 00:07:20.368 13:37:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:20.368 13:37:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:20.368 13:37:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:20.368 13:37:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:20.368 13:37:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:20.368 No valid GPT data, bailing 00:07:20.368 13:37:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:20.368 13:37:01 -- scripts/common.sh@394 -- # pt= 00:07:20.368 13:37:01 -- scripts/common.sh@395 -- # return 1 00:07:20.368 13:37:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:20.368 1+0 records in 00:07:20.368 1+0 records out 00:07:20.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650886 s, 161 MB/s 00:07:20.368 13:37:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:20.368 13:37:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:20.368 13:37:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:20.368 13:37:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:20.368 13:37:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:20.368 No valid GPT data, bailing 00:07:20.368 13:37:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:20.368 13:37:01 -- scripts/common.sh@394 -- # pt= 00:07:20.368 13:37:01 -- scripts/common.sh@395 -- # return 1 
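Each "No valid GPT data, bailing" above comes from the pre-cleanup guard: spdk-gpt.py and blkid -s PTTYPE both probe the namespace, and only when neither finds a partition table is the first MiB overwritten with dd so stale metadata cannot leak into later tests. A condensed sketch of that guard, folding the two probes into the blkid check alone (illustrative, not the verbatim autotest.sh loop):

    # Sketch: zero the first MiB of an NVMe namespace only if no
    # partition table is detected, mirroring the trace above.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        # blkid emits a PTTYPE value (e.g. "gpt") when a table exists.
        pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
        if [[ -z $pt ]]; then
            echo "No valid GPT data on $dev, zeroing first MiB"
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done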
00:07:20.368 13:37:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:20.368 1+0 records in 00:07:20.368 1+0 records out 00:07:20.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647972 s, 162 MB/s 00:07:20.368 13:37:01 -- spdk/autotest.sh@105 -- # sync 00:07:20.626 13:37:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:20.626 13:37:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:20.626 13:37:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:23.914 13:37:04 -- spdk/autotest.sh@111 -- # uname -s 00:07:23.914 13:37:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:23.914 13:37:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:23.914 13:37:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:24.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.174 Hugepages 00:07:24.174 node hugesize free / total 00:07:24.174 node0 1048576kB 0 / 0 00:07:24.174 node0 2048kB 0 / 0 00:07:24.174 00:07:24.174 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:24.174 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:24.432 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:24.433 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:24.433 13:37:05 -- spdk/autotest.sh@117 -- # uname -s 00:07:24.433 13:37:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:24.433 13:37:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:24.433 13:37:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:25.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.369 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:25.369 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:25.630 13:37:06 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:26.587 13:37:07 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:26.587 13:37:07 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:26.587 13:37:07 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:26.587 13:37:07 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:26.587 13:37:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:26.587 13:37:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:26.587 13:37:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:26.587 13:37:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:26.587 13:37:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:26.587 13:37:07 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:26.587 13:37:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:26.587 13:37:07 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.155 Waiting for block devices as requested 00:07:27.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.414 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.414 13:37:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:27.414 13:37:08 -- 
common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:27.414 13:37:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:27.414 13:37:08 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:07:27.414 13:37:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:27.414 13:37:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:27.414 13:37:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:27.414 13:37:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:07:27.415 13:37:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:07:27.415 13:37:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:07:27.415 13:37:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:07:27.415 13:37:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:27.415 13:37:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:27.415 13:37:08 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:27.415 13:37:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:27.415 13:37:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:27.415 13:37:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:27.415 13:37:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:07:27.415 13:37:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:27.415 13:37:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:27.415 13:37:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:27.415 13:37:08 -- common/autotest_common.sh@1541 -- # continue 00:07:27.415 13:37:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:27.415 13:37:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:27.415 13:37:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:27.415 13:37:08 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:07:27.415 13:37:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:27.415 13:37:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:27.415 13:37:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:27.415 13:37:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:27.415 13:37:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:27.415 13:37:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:27.675 13:37:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:27.675 13:37:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:27.676 13:37:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:27.676 13:37:08 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:27.676 13:37:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:27.676 13:37:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:27.676 13:37:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:27.676 13:37:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:27.676 13:37:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:27.676 13:37:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 
00:07:27.676 13:37:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:27.676 13:37:08 -- common/autotest_common.sh@1541 -- # continue 00:07:27.676 13:37:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:27.676 13:37:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.676 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.676 13:37:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:27.676 13:37:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.676 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.676 13:37:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.613 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:28.613 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:28.613 13:37:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:28.613 13:37:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.613 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.613 13:37:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:28.613 13:37:09 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:28.613 13:37:09 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:28.613 13:37:09 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:28.613 13:37:09 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:28.613 13:37:09 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:28.613 13:37:09 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:28.613 13:37:09 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:28.613 13:37:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:28.613 13:37:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:28.613 13:37:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:28.613 13:37:09 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:28.613 13:37:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:28.872 13:37:09 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:28.872 13:37:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:28.872 13:37:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:28.872 13:37:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:28.872 13:37:09 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:28.872 13:37:09 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:28.872 13:37:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:28.872 13:37:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:28.872 13:37:09 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:28.872 13:37:09 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:28.872 13:37:09 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:07:28.872 13:37:09 -- common/autotest_common.sh@1570 -- # return 0 00:07:28.872 13:37:09 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:28.872 13:37:09 -- common/autotest_common.sh@1578 -- # return 0 00:07:28.872 13:37:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:28.872 13:37:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:28.872 
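The opal_revert_cleanup pass above interrogates each controller the same way: nvme id-ctrl output is grepped for the oacs line, bit 3 of OACS (the 8 in oacs_ns_manage) gates namespace management, and a zero unvmcap lets the loop continue without reverting anything. A hedged sketch of that probe (the helper name is illustrative):

    # Sketch of the per-controller probe traced above. OACS bit 3
    # (0x8) advertises Namespace Management/Attachment support.
    nvme_supports_ns_mgmt() {
        local ctrlr=$1 oacs
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 ))
    }
    nvme_supports_ns_mgmt /dev/nvme1 && echo "/dev/nvme1 supports namespace management"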
13:37:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:28.872 13:37:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:28.872 13:37:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:28.872 13:37:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.872 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.872 13:37:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:28.872 13:37:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:28.872 13:37:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:28.872 13:37:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.872 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.872 ************************************ 00:07:28.872 START TEST env 00:07:28.872 ************************************ 00:07:28.872 13:37:09 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:28.872 * Looking for test storage... 00:07:29.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.131 13:37:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.131 13:37:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.131 13:37:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.131 13:37:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.131 13:37:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.131 13:37:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.131 13:37:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.131 13:37:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.131 13:37:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.131 13:37:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.131 13:37:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.131 13:37:09 env -- scripts/common.sh@344 -- # case "$op" in 00:07:29.131 13:37:09 env -- scripts/common.sh@345 -- # : 1 00:07:29.131 13:37:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.131 13:37:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.131 13:37:09 env -- scripts/common.sh@365 -- # decimal 1 00:07:29.131 13:37:09 env -- scripts/common.sh@353 -- # local d=1 00:07:29.131 13:37:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.131 13:37:09 env -- scripts/common.sh@355 -- # echo 1 00:07:29.131 13:37:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.131 13:37:09 env -- scripts/common.sh@366 -- # decimal 2 00:07:29.131 13:37:09 env -- scripts/common.sh@353 -- # local d=2 00:07:29.131 13:37:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.131 13:37:09 env -- scripts/common.sh@355 -- # echo 2 00:07:29.131 13:37:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.131 13:37:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.131 13:37:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.131 13:37:09 env -- scripts/common.sh@368 -- # return 0 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.131 --rc genhtml_branch_coverage=1 00:07:29.131 --rc genhtml_function_coverage=1 00:07:29.131 --rc genhtml_legend=1 00:07:29.131 --rc geninfo_all_blocks=1 00:07:29.131 --rc geninfo_unexecuted_blocks=1 00:07:29.131 00:07:29.131 ' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.131 --rc genhtml_branch_coverage=1 00:07:29.131 --rc genhtml_function_coverage=1 00:07:29.131 --rc genhtml_legend=1 00:07:29.131 --rc geninfo_all_blocks=1 00:07:29.131 --rc geninfo_unexecuted_blocks=1 00:07:29.131 00:07:29.131 ' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:29.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.131 --rc genhtml_branch_coverage=1 00:07:29.131 --rc genhtml_function_coverage=1 00:07:29.131 --rc genhtml_legend=1 00:07:29.131 --rc geninfo_all_blocks=1 00:07:29.131 --rc geninfo_unexecuted_blocks=1 00:07:29.131 00:07:29.131 ' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.131 --rc genhtml_branch_coverage=1 00:07:29.131 --rc genhtml_function_coverage=1 00:07:29.131 --rc genhtml_legend=1 00:07:29.131 --rc geninfo_all_blocks=1 00:07:29.131 --rc geninfo_unexecuted_blocks=1 00:07:29.131 00:07:29.131 ' 00:07:29.131 13:37:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:29.131 13:37:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.131 13:37:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.131 ************************************ 00:07:29.131 START TEST env_memory 00:07:29.131 ************************************ 00:07:29.131 13:37:09 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:29.131 00:07:29.131 00:07:29.131 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.131 http://cunit.sourceforge.net/ 00:07:29.131 00:07:29.131 00:07:29.131 Suite: memory 00:07:29.131 Test: alloc and free memory map ...[2024-11-06 13:37:09.970422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:29.131 passed 00:07:29.131 Test: mem map translation ...[2024-11-06 13:37:09.991798] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:29.131 [2024-11-06 13:37:09.991829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:29.131 [2024-11-06 13:37:09.991874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:29.131 [2024-11-06 13:37:09.991884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:29.131 passed 00:07:29.131 Test: mem map registration ...[2024-11-06 13:37:10.030384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:29.131 [2024-11-06 13:37:10.030422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:29.131 passed 00:07:29.391 Test: mem map adjacent registrations ...passed 00:07:29.391 00:07:29.391 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.391 suites 1 1 n/a 0 0 00:07:29.391 tests 4 4 4 0 0 00:07:29.391 asserts 152 152 152 0 n/a 00:07:29.391 00:07:29.391 Elapsed time = 0.139 seconds 00:07:29.391 00:07:29.391 real 0m0.164s 00:07:29.391 user 0m0.145s 00:07:29.391 sys 0m0.016s 00:07:29.391 13:37:10 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.391 13:37:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:29.391 ************************************ 00:07:29.391 END TEST env_memory 00:07:29.391 ************************************ 00:07:29.391 13:37:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:29.391 13:37:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:29.391 13:37:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.391 13:37:10 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.391 ************************************ 00:07:29.391 START TEST env_vtophys 00:07:29.391 ************************************ 00:07:29.391 13:37:10 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:29.391 EAL: lib.eal log level changed from notice to debug 00:07:29.391 EAL: Detected lcore 0 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 1 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 2 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 3 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 4 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 5 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 6 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 7 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 8 as core 0 on socket 0 00:07:29.391 EAL: Detected lcore 9 as core 0 on socket 0 00:07:29.391 EAL: Maximum logical cores by configuration: 128 00:07:29.391 EAL: Detected CPU lcores: 10 00:07:29.391 EAL: Detected NUMA nodes: 1 00:07:29.391 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:29.391 EAL: Detected shared linkage of DPDK 00:07:29.391 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:29.391 EAL: Selected IOVA mode 'PA' 00:07:29.391 EAL: Probing VFIO support... 00:07:29.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:29.391 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:29.391 EAL: Ask a virtual area of 0x2e000 bytes 00:07:29.391 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:29.391 EAL: Setting up physically contiguous memory... 00:07:29.391 EAL: Setting maximum number of open files to 524288 00:07:29.391 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:29.391 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:29.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.391 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:29.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.391 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:29.391 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:29.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.391 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:29.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.391 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:29.391 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:29.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.391 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:29.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.391 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:29.391 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:29.391 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.391 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:29.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.391 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.391 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:29.391 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:29.391 EAL: Hugepages will be freed exactly as allocated. 00:07:29.391 EAL: No shared files mode enabled, IPC is disabled 00:07:29.391 EAL: No shared files mode enabled, IPC is disabled 00:07:29.391 EAL: TSC frequency is ~2490000 KHz 00:07:29.391 EAL: Main lcore 0 is ready (tid=7fd9c52e2a00;cpuset=[0]) 00:07:29.391 EAL: Trying to obtain current memory policy. 00:07:29.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.391 EAL: Restoring previous memory policy: 0 00:07:29.391 EAL: request: mp_malloc_sync 00:07:29.391 EAL: No shared files mode enabled, IPC is disabled 00:07:29.391 EAL: Heap on socket 0 was expanded by 2MB 00:07:29.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:29.391 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:29.391 EAL: Mem event callback 'spdk:(nil)' registered 00:07:29.391 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:29.651 00:07:29.651 00:07:29.651 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.651 http://cunit.sourceforge.net/ 00:07:29.651 00:07:29.651 00:07:29.651 Suite: components_suite 00:07:29.651 Test: vtophys_malloc_test ...passed 00:07:29.651 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 4MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 4MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 6MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 6MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 10MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 10MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 18MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 18MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 34MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 34MB 00:07:29.651 EAL: Trying to obtain current memory policy. 
00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 66MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 66MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.651 EAL: Restoring previous memory policy: 4 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was expanded by 130MB 00:07:29.651 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.651 EAL: request: mp_malloc_sync 00:07:29.651 EAL: No shared files mode enabled, IPC is disabled 00:07:29.651 EAL: Heap on socket 0 was shrunk by 130MB 00:07:29.651 EAL: Trying to obtain current memory policy. 00:07:29.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.952 EAL: Restoring previous memory policy: 4 00:07:29.952 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.952 EAL: request: mp_malloc_sync 00:07:29.952 EAL: No shared files mode enabled, IPC is disabled 00:07:29.952 EAL: Heap on socket 0 was expanded by 258MB 00:07:29.952 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.952 EAL: request: mp_malloc_sync 00:07:29.952 EAL: No shared files mode enabled, IPC is disabled 00:07:29.952 EAL: Heap on socket 0 was shrunk by 258MB 00:07:29.952 EAL: Trying to obtain current memory policy. 00:07:29.952 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.210 EAL: Restoring previous memory policy: 4 00:07:30.210 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.210 EAL: request: mp_malloc_sync 00:07:30.210 EAL: No shared files mode enabled, IPC is disabled 00:07:30.210 EAL: Heap on socket 0 was expanded by 514MB 00:07:30.210 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.469 EAL: request: mp_malloc_sync 00:07:30.469 EAL: No shared files mode enabled, IPC is disabled 00:07:30.469 EAL: Heap on socket 0 was shrunk by 514MB 00:07:30.469 EAL: Trying to obtain current memory policy. 
00:07:30.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.736 EAL: Restoring previous memory policy: 4 00:07:30.736 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.736 EAL: request: mp_malloc_sync 00:07:30.736 EAL: No shared files mode enabled, IPC is disabled 00:07:30.736 EAL: Heap on socket 0 was expanded by 1026MB 00:07:30.997 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.256 passed 00:07:31.256 00:07:31.256 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.256 suites 1 1 n/a 0 0 00:07:31.256 tests 2 2 2 0 0 00:07:31.256 asserts 5631 5631 5631 0 n/a 00:07:31.256 00:07:31.256 Elapsed time = 1.824 seconds 00:07:31.256 EAL: request: mp_malloc_sync 00:07:31.256 EAL: No shared files mode enabled, IPC is disabled 00:07:31.256 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:31.515 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.515 EAL: request: mp_malloc_sync 00:07:31.515 EAL: No shared files mode enabled, IPC is disabled 00:07:31.515 EAL: Heap on socket 0 was shrunk by 2MB 00:07:31.515 EAL: No shared files mode enabled, IPC is disabled 00:07:31.515 EAL: No shared files mode enabled, IPC is disabled 00:07:31.515 EAL: No shared files mode enabled, IPC is disabled 00:07:31.515 00:07:31.515 real 0m2.043s 00:07:31.515 user 0m1.174s 00:07:31.515 sys 0m0.737s 00:07:31.515 13:37:12 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.515 13:37:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:31.515 ************************************ 00:07:31.515 END TEST env_vtophys 00:07:31.515 ************************************ 00:07:31.515 13:37:12 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:31.515 13:37:12 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.515 13:37:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.515 13:37:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:31.515 ************************************ 00:07:31.515 START TEST env_pci 00:07:31.515 ************************************ 00:07:31.515 13:37:12 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:31.515 00:07:31.515 00:07:31.515 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.515 http://cunit.sourceforge.net/ 00:07:31.515 00:07:31.515 00:07:31.515 Suite: pci 00:07:31.515 Test: pci_hook ...[2024-11-06 13:37:12.289137] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58401 has claimed it 00:07:31.515 passed 00:07:31.515 00:07:31.515 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.515 suites 1 1 n/a 0 0 00:07:31.515 tests 1 1 1 0 0 00:07:31.515 asserts 25 25 25 0 n/a 00:07:31.515 00:07:31.515 Elapsed time = 0.003 seconds 00:07:31.515 EAL: Cannot find device (10000:00:01.0) 00:07:31.515 EAL: Failed to attach device on primary process 00:07:31.515 00:07:31.515 real 0m0.035s 00:07:31.515 user 0m0.020s 00:07:31.515 sys 0m0.013s 00:07:31.515 13:37:12 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.515 13:37:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:31.515 ************************************ 00:07:31.515 END TEST env_pci 00:07:31.515 ************************************ 00:07:31.515 13:37:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:31.515 13:37:12 env -- env/env.sh@15 -- # uname 00:07:31.515 13:37:12 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:31.515 13:37:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:31.515 13:37:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:31.515 13:37:12 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:31.515 13:37:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.515 13:37:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:31.515 ************************************ 00:07:31.515 START TEST env_dpdk_post_init 00:07:31.515 ************************************ 00:07:31.515 13:37:12 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:31.515 EAL: Detected CPU lcores: 10 00:07:31.515 EAL: Detected NUMA nodes: 1 00:07:31.515 EAL: Detected shared linkage of DPDK 00:07:31.515 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:31.515 EAL: Selected IOVA mode 'PA' 00:07:31.776 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:31.776 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:31.776 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:31.776 Starting DPDK initialization... 00:07:31.776 Starting SPDK post initialization... 00:07:31.776 SPDK NVMe probe 00:07:31.776 Attaching to 0000:00:10.0 00:07:31.776 Attaching to 0000:00:11.0 00:07:31.776 Attached to 0000:00:10.0 00:07:31.776 Attached to 0000:00:11.0 00:07:31.776 Cleaning up... 00:07:31.776 00:07:31.776 real 0m0.207s 00:07:31.776 user 0m0.063s 00:07:31.776 sys 0m0.045s 00:07:31.776 ************************************ 00:07:31.776 END TEST env_dpdk_post_init 00:07:31.776 ************************************ 00:07:31.776 13:37:12 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.776 13:37:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:31.776 13:37:12 env -- env/env.sh@26 -- # uname 00:07:31.776 13:37:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:31.776 13:37:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:31.776 13:37:12 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.776 13:37:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.776 13:37:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:31.776 ************************************ 00:07:31.776 START TEST env_mem_callbacks 00:07:31.776 ************************************ 00:07:31.776 13:37:12 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:31.776 EAL: Detected CPU lcores: 10 00:07:31.776 EAL: Detected NUMA nodes: 1 00:07:31.776 EAL: Detected shared linkage of DPDK 00:07:31.776 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:31.776 EAL: Selected IOVA mode 'PA' 00:07:32.035 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:32.035 00:07:32.035 00:07:32.035 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.035 http://cunit.sourceforge.net/ 00:07:32.035 00:07:32.035 00:07:32.035 Suite: memory 00:07:32.035 Test: test ... 
00:07:32.035 register 0x200000200000 2097152 00:07:32.035 malloc 3145728 00:07:32.035 register 0x200000400000 4194304 00:07:32.035 buf 0x200000500000 len 3145728 PASSED 00:07:32.035 malloc 64 00:07:32.035 buf 0x2000004fff40 len 64 PASSED 00:07:32.035 malloc 4194304 00:07:32.035 register 0x200000800000 6291456 00:07:32.035 buf 0x200000a00000 len 4194304 PASSED 00:07:32.035 free 0x200000500000 3145728 00:07:32.035 free 0x2000004fff40 64 00:07:32.035 unregister 0x200000400000 4194304 PASSED 00:07:32.035 free 0x200000a00000 4194304 00:07:32.035 unregister 0x200000800000 6291456 PASSED 00:07:32.035 malloc 8388608 00:07:32.035 register 0x200000400000 10485760 00:07:32.035 buf 0x200000600000 len 8388608 PASSED 00:07:32.035 free 0x200000600000 8388608 00:07:32.035 unregister 0x200000400000 10485760 PASSED 00:07:32.035 passed 00:07:32.035 00:07:32.035 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.035 suites 1 1 n/a 0 0 00:07:32.035 tests 1 1 1 0 0 00:07:32.035 asserts 15 15 15 0 n/a 00:07:32.035 00:07:32.035 Elapsed time = 0.010 seconds 00:07:32.035 00:07:32.035 real 0m0.160s 00:07:32.035 user 0m0.024s 00:07:32.035 sys 0m0.034s 00:07:32.035 13:37:12 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.035 13:37:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:32.035 ************************************ 00:07:32.035 END TEST env_mem_callbacks 00:07:32.035 ************************************ 00:07:32.035 ************************************ 00:07:32.035 END TEST env 00:07:32.035 ************************************ 00:07:32.035 00:07:32.035 real 0m3.205s 00:07:32.035 user 0m1.673s 00:07:32.035 sys 0m1.209s 00:07:32.035 13:37:12 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.035 13:37:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:32.035 13:37:12 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:32.035 13:37:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.035 13:37:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.035 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:07:32.035 ************************************ 00:07:32.035 START TEST rpc 00:07:32.036 ************************************ 00:07:32.036 13:37:12 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:32.295 * Looking for test storage... 
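Every START TEST / END TEST banner pair in this log, including the rpc suite spinning up here, is emitted by the same wrapper: it prints the opening banner, times the test script, and closes with the banner plus the real/user/sys figures seen after END TEST make above. A sketch of that wrapper (illustrative, not the verbatim run_test source):

    # Sketch of the banner-and-time wrapper behind the START/END
    # TEST blocks in this log.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test "env" /home/vagrant/spdk_repo/spdk/test/env/env.sh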
00:07:32.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.295 13:37:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.295 13:37:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.295 13:37:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.295 13:37:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.295 13:37:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.295 13:37:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:32.295 13:37:13 rpc -- scripts/common.sh@345 -- # : 1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.295 13:37:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.295 13:37:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@353 -- # local d=1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.295 13:37:13 rpc -- scripts/common.sh@355 -- # echo 1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.295 13:37:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@353 -- # local d=2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.295 13:37:13 rpc -- scripts/common.sh@355 -- # echo 2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.295 13:37:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.295 13:37:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.295 13:37:13 rpc -- scripts/common.sh@368 -- # return 0 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.295 13:37:13 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.295 --rc genhtml_branch_coverage=1 00:07:32.295 --rc genhtml_function_coverage=1 00:07:32.295 --rc genhtml_legend=1 00:07:32.295 --rc geninfo_all_blocks=1 00:07:32.295 --rc geninfo_unexecuted_blocks=1 00:07:32.295 00:07:32.296 ' 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.296 --rc genhtml_branch_coverage=1 00:07:32.296 --rc genhtml_function_coverage=1 00:07:32.296 --rc genhtml_legend=1 00:07:32.296 --rc geninfo_all_blocks=1 00:07:32.296 --rc geninfo_unexecuted_blocks=1 00:07:32.296 00:07:32.296 ' 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.296 --rc genhtml_branch_coverage=1 00:07:32.296 --rc genhtml_function_coverage=1 00:07:32.296 --rc 
genhtml_legend=1 00:07:32.296 --rc geninfo_all_blocks=1 00:07:32.296 --rc geninfo_unexecuted_blocks=1 00:07:32.296 00:07:32.296 ' 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.296 --rc genhtml_branch_coverage=1 00:07:32.296 --rc genhtml_function_coverage=1 00:07:32.296 --rc genhtml_legend=1 00:07:32.296 --rc geninfo_all_blocks=1 00:07:32.296 --rc geninfo_unexecuted_blocks=1 00:07:32.296 00:07:32.296 ' 00:07:32.296 13:37:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58524 00:07:32.296 13:37:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:32.296 13:37:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.296 13:37:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58524 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@833 -- # '[' -z 58524 ']' 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.296 13:37:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.555 [2024-11-06 13:37:13.237032] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:07:32.555 [2024-11-06 13:37:13.237360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58524 ] 00:07:32.555 [2024-11-06 13:37:13.387985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.555 [2024-11-06 13:37:13.464791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:32.555 [2024-11-06 13:37:13.465048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58524' to capture a snapshot of events at runtime. 00:07:32.555 [2024-11-06 13:37:13.465140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.555 [2024-11-06 13:37:13.465189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.555 [2024-11-06 13:37:13.465215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58524 for offline analysis/debug. 
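The rpc_integrity test that follows drives the freshly started spdk_tgt over /var/tmp/spdk.sock: it creates a malloc bdev, layers a passthru vbdev on top of it, verifies both via bdev_get_bdevs, then deletes them in reverse order. A minimal sketch of that same flow, assuming a running target on the default socket and the standard scripts/rpc.py client from the SPDK repo (the $rpc shorthand and repo path are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks (16384 blocks, matching the dump below); prints the bdev name, e.g. Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 and expose it as Passthru0
  $rpc bdev_get_bdevs | jq length                    # expect 2: the claimed base bdev plus the passthru on top
  $rpc bdev_passthru_delete Passthru0                # tear down in reverse order of creation
  $rpc bdev_malloc_delete Malloc0

The test below does the same through the rpc_cmd wrapper and additionally asserts that the claimed base bdev reports "claim_type": "exclusive_write" while stacked under the passthru.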
00:07:32.555 [2024-11-06 13:37:13.465680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.493 13:37:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.493 13:37:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:33.493 13:37:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:33.493 13:37:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:33.493 13:37:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:33.493 13:37:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:33.493 13:37:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.493 13:37:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.493 13:37:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.493 ************************************ 00:07:33.493 START TEST rpc_integrity 00:07:33.493 ************************************ 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.493 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.493 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:33.493 { 00:07:33.493 "aliases": [ 00:07:33.493 "0fe61a6f-d663-425a-959a-2381fea057f8" 00:07:33.493 ], 00:07:33.493 "assigned_rate_limits": { 00:07:33.493 "r_mbytes_per_sec": 0, 00:07:33.493 "rw_ios_per_sec": 0, 00:07:33.493 "rw_mbytes_per_sec": 0, 00:07:33.493 "w_mbytes_per_sec": 0 00:07:33.493 }, 00:07:33.493 "block_size": 512, 00:07:33.493 "claimed": false, 00:07:33.493 "driver_specific": {}, 00:07:33.493 "memory_domains": [ 00:07:33.493 { 00:07:33.493 "dma_device_id": "system", 00:07:33.493 "dma_device_type": 1 00:07:33.493 }, 00:07:33.493 { 00:07:33.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.493 "dma_device_type": 2 00:07:33.493 } 00:07:33.493 ], 00:07:33.493 "name": "Malloc0", 
00:07:33.493 "num_blocks": 16384, 00:07:33.493 "product_name": "Malloc disk", 00:07:33.493 "supported_io_types": { 00:07:33.493 "abort": true, 00:07:33.493 "compare": false, 00:07:33.493 "compare_and_write": false, 00:07:33.493 "copy": true, 00:07:33.493 "flush": true, 00:07:33.493 "get_zone_info": false, 00:07:33.494 "nvme_admin": false, 00:07:33.494 "nvme_io": false, 00:07:33.494 "nvme_io_md": false, 00:07:33.494 "nvme_iov_md": false, 00:07:33.494 "read": true, 00:07:33.494 "reset": true, 00:07:33.494 "seek_data": false, 00:07:33.494 "seek_hole": false, 00:07:33.494 "unmap": true, 00:07:33.494 "write": true, 00:07:33.494 "write_zeroes": true, 00:07:33.494 "zcopy": true, 00:07:33.494 "zone_append": false, 00:07:33.494 "zone_management": false 00:07:33.494 }, 00:07:33.494 "uuid": "0fe61a6f-d663-425a-959a-2381fea057f8", 00:07:33.494 "zoned": false 00:07:33.494 } 00:07:33.494 ]' 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 [2024-11-06 13:37:14.323518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:33.494 [2024-11-06 13:37:14.323570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.494 [2024-11-06 13:37:14.323594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf23c20 00:07:33.494 [2024-11-06 13:37:14.323606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.494 [2024-11-06 13:37:14.325196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.494 [2024-11-06 13:37:14.325235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:33.494 Passthru0 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:33.494 { 00:07:33.494 "aliases": [ 00:07:33.494 "0fe61a6f-d663-425a-959a-2381fea057f8" 00:07:33.494 ], 00:07:33.494 "assigned_rate_limits": { 00:07:33.494 "r_mbytes_per_sec": 0, 00:07:33.494 "rw_ios_per_sec": 0, 00:07:33.494 "rw_mbytes_per_sec": 0, 00:07:33.494 "w_mbytes_per_sec": 0 00:07:33.494 }, 00:07:33.494 "block_size": 512, 00:07:33.494 "claim_type": "exclusive_write", 00:07:33.494 "claimed": true, 00:07:33.494 "driver_specific": {}, 00:07:33.494 "memory_domains": [ 00:07:33.494 { 00:07:33.494 "dma_device_id": "system", 00:07:33.494 "dma_device_type": 1 00:07:33.494 }, 00:07:33.494 { 00:07:33.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.494 "dma_device_type": 2 00:07:33.494 } 00:07:33.494 ], 00:07:33.494 "name": "Malloc0", 00:07:33.494 "num_blocks": 16384, 00:07:33.494 "product_name": "Malloc disk", 00:07:33.494 "supported_io_types": { 00:07:33.494 "abort": true, 00:07:33.494 "compare": false, 00:07:33.494 
"compare_and_write": false, 00:07:33.494 "copy": true, 00:07:33.494 "flush": true, 00:07:33.494 "get_zone_info": false, 00:07:33.494 "nvme_admin": false, 00:07:33.494 "nvme_io": false, 00:07:33.494 "nvme_io_md": false, 00:07:33.494 "nvme_iov_md": false, 00:07:33.494 "read": true, 00:07:33.494 "reset": true, 00:07:33.494 "seek_data": false, 00:07:33.494 "seek_hole": false, 00:07:33.494 "unmap": true, 00:07:33.494 "write": true, 00:07:33.494 "write_zeroes": true, 00:07:33.494 "zcopy": true, 00:07:33.494 "zone_append": false, 00:07:33.494 "zone_management": false 00:07:33.494 }, 00:07:33.494 "uuid": "0fe61a6f-d663-425a-959a-2381fea057f8", 00:07:33.494 "zoned": false 00:07:33.494 }, 00:07:33.494 { 00:07:33.494 "aliases": [ 00:07:33.494 "0e9a1ab0-f915-5159-bfc0-04d66cb4efb4" 00:07:33.494 ], 00:07:33.494 "assigned_rate_limits": { 00:07:33.494 "r_mbytes_per_sec": 0, 00:07:33.494 "rw_ios_per_sec": 0, 00:07:33.494 "rw_mbytes_per_sec": 0, 00:07:33.494 "w_mbytes_per_sec": 0 00:07:33.494 }, 00:07:33.494 "block_size": 512, 00:07:33.494 "claimed": false, 00:07:33.494 "driver_specific": { 00:07:33.494 "passthru": { 00:07:33.494 "base_bdev_name": "Malloc0", 00:07:33.494 "name": "Passthru0" 00:07:33.494 } 00:07:33.494 }, 00:07:33.494 "memory_domains": [ 00:07:33.494 { 00:07:33.494 "dma_device_id": "system", 00:07:33.494 "dma_device_type": 1 00:07:33.494 }, 00:07:33.494 { 00:07:33.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.494 "dma_device_type": 2 00:07:33.494 } 00:07:33.494 ], 00:07:33.494 "name": "Passthru0", 00:07:33.494 "num_blocks": 16384, 00:07:33.494 "product_name": "passthru", 00:07:33.494 "supported_io_types": { 00:07:33.494 "abort": true, 00:07:33.494 "compare": false, 00:07:33.494 "compare_and_write": false, 00:07:33.494 "copy": true, 00:07:33.494 "flush": true, 00:07:33.494 "get_zone_info": false, 00:07:33.494 "nvme_admin": false, 00:07:33.494 "nvme_io": false, 00:07:33.494 "nvme_io_md": false, 00:07:33.494 "nvme_iov_md": false, 00:07:33.494 "read": true, 00:07:33.494 "reset": true, 00:07:33.494 "seek_data": false, 00:07:33.494 "seek_hole": false, 00:07:33.494 "unmap": true, 00:07:33.494 "write": true, 00:07:33.494 "write_zeroes": true, 00:07:33.494 "zcopy": true, 00:07:33.494 "zone_append": false, 00:07:33.494 "zone_management": false 00:07:33.494 }, 00:07:33.494 "uuid": "0e9a1ab0-f915-5159-bfc0-04d66cb4efb4", 00:07:33.494 "zoned": false 00:07:33.494 } 00:07:33.494 ]' 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.494 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.494 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:33.754 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:33.754 ************************************ 00:07:33.754 END TEST rpc_integrity 00:07:33.754 ************************************ 00:07:33.754 13:37:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:33.754 00:07:33.754 real 0m0.322s 00:07:33.754 user 0m0.181s 00:07:33.754 sys 0m0.062s 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:33.754 13:37:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.754 13:37:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.754 13:37:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 ************************************ 00:07:33.754 START TEST rpc_plugins 00:07:33.754 ************************************ 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:33.754 { 00:07:33.754 "aliases": [ 00:07:33.754 "16e393e9-3149-4335-8b94-ede8d186d15e" 00:07:33.754 ], 00:07:33.754 "assigned_rate_limits": { 00:07:33.754 "r_mbytes_per_sec": 0, 00:07:33.754 "rw_ios_per_sec": 0, 00:07:33.754 "rw_mbytes_per_sec": 0, 00:07:33.754 "w_mbytes_per_sec": 0 00:07:33.754 }, 00:07:33.754 "block_size": 4096, 00:07:33.754 "claimed": false, 00:07:33.754 "driver_specific": {}, 00:07:33.754 "memory_domains": [ 00:07:33.754 { 00:07:33.754 "dma_device_id": "system", 00:07:33.754 "dma_device_type": 1 00:07:33.754 }, 00:07:33.754 { 00:07:33.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.754 "dma_device_type": 2 00:07:33.754 } 00:07:33.754 ], 00:07:33.754 "name": "Malloc1", 00:07:33.754 "num_blocks": 256, 00:07:33.754 "product_name": "Malloc disk", 00:07:33.754 "supported_io_types": { 00:07:33.754 "abort": true, 00:07:33.754 "compare": false, 00:07:33.754 "compare_and_write": false, 00:07:33.754 "copy": true, 00:07:33.754 "flush": true, 00:07:33.754 "get_zone_info": false, 00:07:33.754 "nvme_admin": false, 00:07:33.754 "nvme_io": false, 00:07:33.754 "nvme_io_md": false, 00:07:33.754 "nvme_iov_md": false, 00:07:33.754 "read": true, 00:07:33.754 "reset": true, 00:07:33.754 "seek_data": false, 00:07:33.754 "seek_hole": false, 00:07:33.754 "unmap": true, 00:07:33.754 "write": true, 00:07:33.754 "write_zeroes": true, 00:07:33.754 "zcopy": true, 00:07:33.754 "zone_append": false, 
00:07:33.754 "zone_management": false 00:07:33.754 }, 00:07:33.754 "uuid": "16e393e9-3149-4335-8b94-ede8d186d15e", 00:07:33.754 "zoned": false 00:07:33.754 } 00:07:33.754 ]' 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.754 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:33.754 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:34.014 ************************************ 00:07:34.014 END TEST rpc_plugins 00:07:34.014 ************************************ 00:07:34.014 13:37:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:34.014 00:07:34.014 real 0m0.154s 00:07:34.014 user 0m0.090s 00:07:34.014 sys 0m0.025s 00:07:34.014 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.014 13:37:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 13:37:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:34.014 13:37:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.014 13:37:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.014 13:37:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 ************************************ 00:07:34.014 START TEST rpc_trace_cmd_test 00:07:34.014 ************************************ 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:34.014 "bdev": { 00:07:34.014 "mask": "0x8", 00:07:34.014 "tpoint_mask": "0xffffffffffffffff" 00:07:34.014 }, 00:07:34.014 "bdev_nvme": { 00:07:34.014 "mask": "0x4000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "bdev_raid": { 00:07:34.014 "mask": "0x20000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "blob": { 00:07:34.014 "mask": "0x10000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "blobfs": { 00:07:34.014 "mask": "0x80", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "dsa": { 00:07:34.014 "mask": "0x200", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "ftl": { 00:07:34.014 "mask": "0x40", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "iaa": { 00:07:34.014 "mask": "0x1000", 
00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "iscsi_conn": { 00:07:34.014 "mask": "0x2", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "nvme_pcie": { 00:07:34.014 "mask": "0x800", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "nvme_tcp": { 00:07:34.014 "mask": "0x2000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "nvmf_rdma": { 00:07:34.014 "mask": "0x10", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "nvmf_tcp": { 00:07:34.014 "mask": "0x20", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "scheduler": { 00:07:34.014 "mask": "0x40000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "scsi": { 00:07:34.014 "mask": "0x4", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "sock": { 00:07:34.014 "mask": "0x8000", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "thread": { 00:07:34.014 "mask": "0x400", 00:07:34.014 "tpoint_mask": "0x0" 00:07:34.014 }, 00:07:34.014 "tpoint_group_mask": "0x8", 00:07:34.014 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58524" 00:07:34.014 }' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:34.014 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:34.273 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:34.273 13:37:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:34.273 ************************************ 00:07:34.273 END TEST rpc_trace_cmd_test 00:07:34.273 ************************************ 00:07:34.273 13:37:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:34.273 00:07:34.273 real 0m0.243s 00:07:34.273 user 0m0.187s 00:07:34.273 sys 0m0.044s 00:07:34.273 13:37:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.273 13:37:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.273 13:37:15 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:07:34.273 13:37:15 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:07:34.273 13:37:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.273 13:37:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.273 13:37:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.273 ************************************ 00:07:34.273 START TEST go_rpc 00:07:34.273 ************************************ 00:07:34.273 13:37:15 rpc.go_rpc -- common/autotest_common.sh@1127 -- # go_rpc 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:07:34.273 13:37:15 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.273 13:37:15 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.273 13:37:15 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["5e041787-e267-4f05-8e05-a2de6f299155"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"5e041787-e267-4f05-8e05-a2de6f299155","zoned":false}]' 00:07:34.273 13:37:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:34.531 13:37:15 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.531 13:37:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.531 13:37:15 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:07:34.531 ************************************ 00:07:34.531 END TEST go_rpc 00:07:34.531 ************************************ 00:07:34.531 13:37:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:07:34.531 00:07:34.531 real 0m0.213s 00:07:34.531 user 0m0.115s 00:07:34.531 sys 0m0.052s 00:07:34.531 13:37:15 rpc.go_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.531 13:37:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.531 13:37:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:34.531 13:37:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:34.531 13:37:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.531 13:37:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.531 13:37:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.531 ************************************ 00:07:34.531 START TEST rpc_daemon_integrity 00:07:34.531 ************************************ 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:34.531 
13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.531 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.532 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.532 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:34.532 { 00:07:34.532 "aliases": [ 00:07:34.532 "d3835c07-c9cd-4047-80e5-45f5f7b9f50d" 00:07:34.532 ], 00:07:34.532 "assigned_rate_limits": { 00:07:34.532 "r_mbytes_per_sec": 0, 00:07:34.532 "rw_ios_per_sec": 0, 00:07:34.532 "rw_mbytes_per_sec": 0, 00:07:34.532 "w_mbytes_per_sec": 0 00:07:34.532 }, 00:07:34.532 "block_size": 512, 00:07:34.532 "claimed": false, 00:07:34.532 "driver_specific": {}, 00:07:34.532 "memory_domains": [ 00:07:34.532 { 00:07:34.532 "dma_device_id": "system", 00:07:34.532 "dma_device_type": 1 00:07:34.532 }, 00:07:34.532 { 00:07:34.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.532 "dma_device_type": 2 00:07:34.532 } 00:07:34.532 ], 00:07:34.532 "name": "Malloc3", 00:07:34.532 "num_blocks": 16384, 00:07:34.532 "product_name": "Malloc disk", 00:07:34.532 "supported_io_types": { 00:07:34.532 "abort": true, 00:07:34.532 "compare": false, 00:07:34.532 "compare_and_write": false, 00:07:34.532 "copy": true, 00:07:34.532 "flush": true, 00:07:34.532 "get_zone_info": false, 00:07:34.532 "nvme_admin": false, 00:07:34.532 "nvme_io": false, 00:07:34.532 "nvme_io_md": false, 00:07:34.532 "nvme_iov_md": false, 00:07:34.532 "read": true, 00:07:34.532 "reset": true, 00:07:34.532 "seek_data": false, 00:07:34.532 "seek_hole": false, 00:07:34.532 "unmap": true, 00:07:34.532 "write": true, 00:07:34.532 "write_zeroes": true, 00:07:34.532 "zcopy": true, 00:07:34.532 "zone_append": false, 00:07:34.532 "zone_management": false 00:07:34.532 }, 00:07:34.532 "uuid": "d3835c07-c9cd-4047-80e5-45f5f7b9f50d", 00:07:34.532 "zoned": false 00:07:34.532 } 00:07:34.532 ]' 00:07:34.532 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.804 [2024-11-06 13:37:15.513204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:34.804 [2024-11-06 13:37:15.513258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.804 [2024-11-06 13:37:15.513276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100da40 00:07:34.804 [2024-11-06 13:37:15.513286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:34.804 [2024-11-06 13:37:15.514877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.804 [2024-11-06 13:37:15.514911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:34.804 Passthru0 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:34.804 { 00:07:34.804 "aliases": [ 00:07:34.804 "d3835c07-c9cd-4047-80e5-45f5f7b9f50d" 00:07:34.804 ], 00:07:34.804 "assigned_rate_limits": { 00:07:34.804 "r_mbytes_per_sec": 0, 00:07:34.804 "rw_ios_per_sec": 0, 00:07:34.804 "rw_mbytes_per_sec": 0, 00:07:34.804 "w_mbytes_per_sec": 0 00:07:34.804 }, 00:07:34.804 "block_size": 512, 00:07:34.804 "claim_type": "exclusive_write", 00:07:34.804 "claimed": true, 00:07:34.804 "driver_specific": {}, 00:07:34.804 "memory_domains": [ 00:07:34.804 { 00:07:34.804 "dma_device_id": "system", 00:07:34.804 "dma_device_type": 1 00:07:34.804 }, 00:07:34.804 { 00:07:34.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.804 "dma_device_type": 2 00:07:34.804 } 00:07:34.804 ], 00:07:34.804 "name": "Malloc3", 00:07:34.804 "num_blocks": 16384, 00:07:34.804 "product_name": "Malloc disk", 00:07:34.804 "supported_io_types": { 00:07:34.804 "abort": true, 00:07:34.804 "compare": false, 00:07:34.804 "compare_and_write": false, 00:07:34.804 "copy": true, 00:07:34.804 "flush": true, 00:07:34.804 "get_zone_info": false, 00:07:34.804 "nvme_admin": false, 00:07:34.804 "nvme_io": false, 00:07:34.804 "nvme_io_md": false, 00:07:34.804 "nvme_iov_md": false, 00:07:34.804 "read": true, 00:07:34.804 "reset": true, 00:07:34.804 "seek_data": false, 00:07:34.804 "seek_hole": false, 00:07:34.804 "unmap": true, 00:07:34.804 "write": true, 00:07:34.804 "write_zeroes": true, 00:07:34.804 "zcopy": true, 00:07:34.804 "zone_append": false, 00:07:34.804 "zone_management": false 00:07:34.804 }, 00:07:34.804 "uuid": "d3835c07-c9cd-4047-80e5-45f5f7b9f50d", 00:07:34.804 "zoned": false 00:07:34.804 }, 00:07:34.804 { 00:07:34.804 "aliases": [ 00:07:34.804 "690e3b0f-b5bf-58d4-9ea3-bf682ff45237" 00:07:34.804 ], 00:07:34.804 "assigned_rate_limits": { 00:07:34.804 "r_mbytes_per_sec": 0, 00:07:34.804 "rw_ios_per_sec": 0, 00:07:34.804 "rw_mbytes_per_sec": 0, 00:07:34.804 "w_mbytes_per_sec": 0 00:07:34.804 }, 00:07:34.804 "block_size": 512, 00:07:34.804 "claimed": false, 00:07:34.804 "driver_specific": { 00:07:34.804 "passthru": { 00:07:34.804 "base_bdev_name": "Malloc3", 00:07:34.804 "name": "Passthru0" 00:07:34.804 } 00:07:34.804 }, 00:07:34.804 "memory_domains": [ 00:07:34.804 { 00:07:34.804 "dma_device_id": "system", 00:07:34.804 "dma_device_type": 1 00:07:34.804 }, 00:07:34.804 { 00:07:34.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.804 "dma_device_type": 2 00:07:34.804 } 00:07:34.804 ], 00:07:34.804 "name": "Passthru0", 00:07:34.804 "num_blocks": 16384, 00:07:34.804 "product_name": "passthru", 00:07:34.804 "supported_io_types": { 00:07:34.804 "abort": true, 00:07:34.804 "compare": false, 00:07:34.804 "compare_and_write": false, 00:07:34.804 "copy": true, 
00:07:34.804 "flush": true, 00:07:34.804 "get_zone_info": false, 00:07:34.804 "nvme_admin": false, 00:07:34.804 "nvme_io": false, 00:07:34.804 "nvme_io_md": false, 00:07:34.804 "nvme_iov_md": false, 00:07:34.804 "read": true, 00:07:34.804 "reset": true, 00:07:34.804 "seek_data": false, 00:07:34.804 "seek_hole": false, 00:07:34.804 "unmap": true, 00:07:34.804 "write": true, 00:07:34.804 "write_zeroes": true, 00:07:34.804 "zcopy": true, 00:07:34.804 "zone_append": false, 00:07:34.804 "zone_management": false 00:07:34.804 }, 00:07:34.804 "uuid": "690e3b0f-b5bf-58d4-9ea3-bf682ff45237", 00:07:34.804 "zoned": false 00:07:34.804 } 00:07:34.804 ]' 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.804 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:34.805 ************************************ 00:07:34.805 END TEST rpc_daemon_integrity 00:07:34.805 ************************************ 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:34.805 00:07:34.805 real 0m0.309s 00:07:34.805 user 0m0.179s 00:07:34.805 sys 0m0.057s 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.805 13:37:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.062 13:37:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:35.062 13:37:15 rpc -- rpc/rpc.sh@84 -- # killprocess 58524 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@952 -- # '[' -z 58524 ']' 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@956 -- # kill -0 58524 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@957 -- # uname 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58524 00:07:35.062 killing process with pid 58524 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58524' 00:07:35.062 13:37:15 rpc -- 
common/autotest_common.sh@971 -- # kill 58524 00:07:35.062 13:37:15 rpc -- common/autotest_common.sh@976 -- # wait 58524 00:07:35.629 00:07:35.629 real 0m3.380s 00:07:35.629 user 0m4.016s 00:07:35.629 sys 0m1.078s 00:07:35.629 13:37:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.629 13:37:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.629 ************************************ 00:07:35.629 END TEST rpc 00:07:35.629 ************************************ 00:07:35.629 13:37:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:35.629 13:37:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.629 13:37:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.629 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:07:35.629 ************************************ 00:07:35.629 START TEST skip_rpc 00:07:35.629 ************************************ 00:07:35.629 13:37:16 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:35.629 * Looking for test storage... 00:07:35.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:35.629 13:37:16 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.629 13:37:16 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.629 13:37:16 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.887 13:37:16 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.887 13:37:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:35.887 13:37:16 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.888 --rc genhtml_branch_coverage=1 00:07:35.888 --rc genhtml_function_coverage=1 00:07:35.888 --rc genhtml_legend=1 00:07:35.888 --rc geninfo_all_blocks=1 00:07:35.888 --rc geninfo_unexecuted_blocks=1 00:07:35.888 00:07:35.888 ' 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.888 --rc genhtml_branch_coverage=1 00:07:35.888 --rc genhtml_function_coverage=1 00:07:35.888 --rc genhtml_legend=1 00:07:35.888 --rc geninfo_all_blocks=1 00:07:35.888 --rc geninfo_unexecuted_blocks=1 00:07:35.888 00:07:35.888 ' 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.888 --rc genhtml_branch_coverage=1 00:07:35.888 --rc genhtml_function_coverage=1 00:07:35.888 --rc genhtml_legend=1 00:07:35.888 --rc geninfo_all_blocks=1 00:07:35.888 --rc geninfo_unexecuted_blocks=1 00:07:35.888 00:07:35.888 ' 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.888 --rc genhtml_branch_coverage=1 00:07:35.888 --rc genhtml_function_coverage=1 00:07:35.888 --rc genhtml_legend=1 00:07:35.888 --rc geninfo_all_blocks=1 00:07:35.888 --rc geninfo_unexecuted_blocks=1 00:07:35.888 00:07:35.888 ' 00:07:35.888 13:37:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:35.888 13:37:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:35.888 13:37:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.888 13:37:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.888 ************************************ 00:07:35.888 START TEST skip_rpc 00:07:35.888 ************************************ 00:07:35.888 13:37:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:35.888 13:37:16 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58793 00:07:35.888 13:37:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:35.888 13:37:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.888 13:37:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:35.888 [2024-11-06 13:37:16.694372] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:07:35.888 [2024-11-06 13:37:16.694459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:07:36.146 [2024-11-06 13:37:16.846657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.146 [2024-11-06 13:37:16.924979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 2024/11/06 13:37:21 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58793 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58793 ']' 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58793 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58793 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:07:41.421 killing process with pid 58793 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58793' 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58793 00:07:41.421 13:37:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58793 00:07:41.421 00:07:41.421 real 0m5.606s 00:07:41.421 user 0m5.115s 00:07:41.421 sys 0m0.411s 00:07:41.421 ************************************ 00:07:41.421 END TEST skip_rpc 00:07:41.421 ************************************ 00:07:41.421 13:37:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.421 13:37:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 13:37:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:41.421 13:37:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.421 13:37:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.421 13:37:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 ************************************ 00:07:41.421 START TEST skip_rpc_with_json 00:07:41.421 ************************************ 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58886 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58886 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58886 ']' 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.421 13:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:41.681 [2024-11-06 13:37:22.366464] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
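The skip_rpc_with_json test starting up here round-trips the target's runtime configuration: it creates the TCP transport over RPC (after first confirming nvmf_get_transports fails while no transport exists), snapshots the full state with save_config, then boots a second spdk_tgt directly from that JSON with the RPC server disabled. A minimal sketch of the same round trip, assuming a running target on the default socket and the standard scripts/rpc.py client (the $rpc shorthand and output path are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp      # the transport must exist before it can appear in the snapshot
  $rpc save_config > /tmp/config.json    # dump every subsystem's current settings as JSON (stdout by default)
  # kill the target, then replay the snapshot with no RPC server at all:
  # spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

Booting from the saved JSON is what makes the --no-rpc-server mode usable: every subsystem listed in the config dump below is configured at startup, so no socket is ever needed.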
00:07:41.681 [2024-11-06 13:37:22.366561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:07:41.681 [2024-11-06 13:37:22.503058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.681 [2024-11-06 13:37:22.579127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.621 [2024-11-06 13:37:23.306674] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:42.621 2024/11/06 13:37:23 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:07:42.621 request: 00:07:42.621 { 00:07:42.621 "method": "nvmf_get_transports", 00:07:42.621 "params": { 00:07:42.621 "trtype": "tcp" 00:07:42.621 } 00:07:42.621 } 00:07:42.621 Got JSON-RPC error response 00:07:42.621 GoRPCClient: error on JSON-RPC call 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.621 [2024-11-06 13:37:23.322715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.621 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:42.621 { 00:07:42.621 "subsystems": [ 00:07:42.621 { 00:07:42.621 "subsystem": "fsdev", 00:07:42.621 "config": [ 00:07:42.621 { 00:07:42.621 "method": "fsdev_set_opts", 00:07:42.621 "params": { 00:07:42.621 "fsdev_io_cache_size": 256, 00:07:42.621 "fsdev_io_pool_size": 65535 00:07:42.621 } 00:07:42.621 } 00:07:42.621 ] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "keyring", 00:07:42.621 "config": [] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "iobuf", 00:07:42.621 "config": [ 00:07:42.621 { 00:07:42.621 "method": "iobuf_set_options", 00:07:42.621 "params": { 00:07:42.621 "enable_numa": false, 00:07:42.621 "large_bufsize": 135168, 00:07:42.621 "large_pool_count": 1024, 00:07:42.621 "small_bufsize": 8192, 00:07:42.621 "small_pool_count": 8192 00:07:42.621 } 
00:07:42.621 } 00:07:42.621 ] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "sock", 00:07:42.621 "config": [ 00:07:42.621 { 00:07:42.621 "method": "sock_set_default_impl", 00:07:42.621 "params": { 00:07:42.621 "impl_name": "posix" 00:07:42.621 } 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "method": "sock_impl_set_options", 00:07:42.621 "params": { 00:07:42.621 "enable_ktls": false, 00:07:42.621 "enable_placement_id": 0, 00:07:42.621 "enable_quickack": false, 00:07:42.621 "enable_recv_pipe": true, 00:07:42.621 "enable_zerocopy_send_client": false, 00:07:42.621 "enable_zerocopy_send_server": true, 00:07:42.621 "impl_name": "ssl", 00:07:42.621 "recv_buf_size": 4096, 00:07:42.621 "send_buf_size": 4096, 00:07:42.621 "tls_version": 0, 00:07:42.621 "zerocopy_threshold": 0 00:07:42.621 } 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "method": "sock_impl_set_options", 00:07:42.621 "params": { 00:07:42.621 "enable_ktls": false, 00:07:42.621 "enable_placement_id": 0, 00:07:42.621 "enable_quickack": false, 00:07:42.621 "enable_recv_pipe": true, 00:07:42.621 "enable_zerocopy_send_client": false, 00:07:42.621 "enable_zerocopy_send_server": true, 00:07:42.621 "impl_name": "posix", 00:07:42.621 "recv_buf_size": 2097152, 00:07:42.621 "send_buf_size": 2097152, 00:07:42.621 "tls_version": 0, 00:07:42.621 "zerocopy_threshold": 0 00:07:42.621 } 00:07:42.621 } 00:07:42.621 ] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "vmd", 00:07:42.621 "config": [] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "accel", 00:07:42.621 "config": [ 00:07:42.621 { 00:07:42.621 "method": "accel_set_options", 00:07:42.621 "params": { 00:07:42.621 "buf_count": 2048, 00:07:42.621 "large_cache_size": 16, 00:07:42.621 "sequence_count": 2048, 00:07:42.621 "small_cache_size": 128, 00:07:42.621 "task_count": 2048 00:07:42.621 } 00:07:42.621 } 00:07:42.621 ] 00:07:42.621 }, 00:07:42.621 { 00:07:42.621 "subsystem": "bdev", 00:07:42.621 "config": [ 00:07:42.621 { 00:07:42.621 "method": "bdev_set_options", 00:07:42.621 "params": { 00:07:42.621 "bdev_auto_examine": true, 00:07:42.621 "bdev_io_cache_size": 256, 00:07:42.622 "bdev_io_pool_size": 65535, 00:07:42.622 "iobuf_large_cache_size": 16, 00:07:42.622 "iobuf_small_cache_size": 128 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "bdev_raid_set_options", 00:07:42.622 "params": { 00:07:42.622 "process_max_bandwidth_mb_sec": 0, 00:07:42.622 "process_window_size_kb": 1024 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "bdev_iscsi_set_options", 00:07:42.622 "params": { 00:07:42.622 "timeout_sec": 30 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "bdev_nvme_set_options", 00:07:42.622 "params": { 00:07:42.622 "action_on_timeout": "none", 00:07:42.622 "allow_accel_sequence": false, 00:07:42.622 "arbitration_burst": 0, 00:07:42.622 "bdev_retry_count": 3, 00:07:42.622 "ctrlr_loss_timeout_sec": 0, 00:07:42.622 "delay_cmd_submit": true, 00:07:42.622 "dhchap_dhgroups": [ 00:07:42.622 "null", 00:07:42.622 "ffdhe2048", 00:07:42.622 "ffdhe3072", 00:07:42.622 "ffdhe4096", 00:07:42.622 "ffdhe6144", 00:07:42.622 "ffdhe8192" 00:07:42.622 ], 00:07:42.622 "dhchap_digests": [ 00:07:42.622 "sha256", 00:07:42.622 "sha384", 00:07:42.622 "sha512" 00:07:42.622 ], 00:07:42.622 "disable_auto_failback": false, 00:07:42.622 "fast_io_fail_timeout_sec": 0, 00:07:42.622 "generate_uuids": false, 00:07:42.622 "high_priority_weight": 0, 00:07:42.622 "io_path_stat": false, 00:07:42.622 "io_queue_requests": 0, 00:07:42.622 
"keep_alive_timeout_ms": 10000, 00:07:42.622 "low_priority_weight": 0, 00:07:42.622 "medium_priority_weight": 0, 00:07:42.622 "nvme_adminq_poll_period_us": 10000, 00:07:42.622 "nvme_error_stat": false, 00:07:42.622 "nvme_ioq_poll_period_us": 0, 00:07:42.622 "rdma_cm_event_timeout_ms": 0, 00:07:42.622 "rdma_max_cq_size": 0, 00:07:42.622 "rdma_srq_size": 0, 00:07:42.622 "reconnect_delay_sec": 0, 00:07:42.622 "timeout_admin_us": 0, 00:07:42.622 "timeout_us": 0, 00:07:42.622 "transport_ack_timeout": 0, 00:07:42.622 "transport_retry_count": 4, 00:07:42.622 "transport_tos": 0 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "bdev_nvme_set_hotplug", 00:07:42.622 "params": { 00:07:42.622 "enable": false, 00:07:42.622 "period_us": 100000 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "bdev_wait_for_examine" 00:07:42.622 } 00:07:42.622 ] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "scsi", 00:07:42.622 "config": null 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "scheduler", 00:07:42.622 "config": [ 00:07:42.622 { 00:07:42.622 "method": "framework_set_scheduler", 00:07:42.622 "params": { 00:07:42.622 "name": "static" 00:07:42.622 } 00:07:42.622 } 00:07:42.622 ] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "vhost_scsi", 00:07:42.622 "config": [] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "vhost_blk", 00:07:42.622 "config": [] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "ublk", 00:07:42.622 "config": [] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "nbd", 00:07:42.622 "config": [] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "nvmf", 00:07:42.622 "config": [ 00:07:42.622 { 00:07:42.622 "method": "nvmf_set_config", 00:07:42.622 "params": { 00:07:42.622 "admin_cmd_passthru": { 00:07:42.622 "identify_ctrlr": false 00:07:42.622 }, 00:07:42.622 "dhchap_dhgroups": [ 00:07:42.622 "null", 00:07:42.622 "ffdhe2048", 00:07:42.622 "ffdhe3072", 00:07:42.622 "ffdhe4096", 00:07:42.622 "ffdhe6144", 00:07:42.622 "ffdhe8192" 00:07:42.622 ], 00:07:42.622 "dhchap_digests": [ 00:07:42.622 "sha256", 00:07:42.622 "sha384", 00:07:42.622 "sha512" 00:07:42.622 ], 00:07:42.622 "discovery_filter": "match_any" 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "nvmf_set_max_subsystems", 00:07:42.622 "params": { 00:07:42.622 "max_subsystems": 1024 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "nvmf_set_crdt", 00:07:42.622 "params": { 00:07:42.622 "crdt1": 0, 00:07:42.622 "crdt2": 0, 00:07:42.622 "crdt3": 0 00:07:42.622 } 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "method": "nvmf_create_transport", 00:07:42.622 "params": { 00:07:42.622 "abort_timeout_sec": 1, 00:07:42.622 "ack_timeout": 0, 00:07:42.622 "buf_cache_size": 4294967295, 00:07:42.622 "c2h_success": true, 00:07:42.622 "data_wr_pool_size": 0, 00:07:42.622 "dif_insert_or_strip": false, 00:07:42.622 "in_capsule_data_size": 4096, 00:07:42.622 "io_unit_size": 131072, 00:07:42.622 "max_aq_depth": 128, 00:07:42.622 "max_io_qpairs_per_ctrlr": 127, 00:07:42.622 "max_io_size": 131072, 00:07:42.622 "max_queue_depth": 128, 00:07:42.622 "num_shared_buffers": 511, 00:07:42.622 "sock_priority": 0, 00:07:42.622 "trtype": "TCP", 00:07:42.622 "zcopy": false 00:07:42.622 } 00:07:42.622 } 00:07:42.622 ] 00:07:42.622 }, 00:07:42.622 { 00:07:42.622 "subsystem": "iscsi", 00:07:42.622 "config": [ 00:07:42.622 { 00:07:42.622 "method": "iscsi_set_options", 00:07:42.622 "params": { 00:07:42.622 "allow_duplicated_isid": false, 
00:07:42.622 "chap_group": 0, 00:07:42.622 "data_out_pool_size": 2048, 00:07:42.622 "default_time2retain": 20, 00:07:42.622 "default_time2wait": 2, 00:07:42.622 "disable_chap": false, 00:07:42.622 "error_recovery_level": 0, 00:07:42.622 "first_burst_length": 8192, 00:07:42.622 "immediate_data": true, 00:07:42.622 "immediate_data_pool_size": 16384, 00:07:42.622 "max_connections_per_session": 2, 00:07:42.622 "max_large_datain_per_connection": 64, 00:07:42.622 "max_queue_depth": 64, 00:07:42.622 "max_r2t_per_connection": 4, 00:07:42.622 "max_sessions": 128, 00:07:42.622 "mutual_chap": false, 00:07:42.622 "node_base": "iqn.2016-06.io.spdk", 00:07:42.622 "nop_in_interval": 30, 00:07:42.622 "nop_timeout": 60, 00:07:42.622 "pdu_pool_size": 36864, 00:07:42.622 "require_chap": false 00:07:42.622 } 00:07:42.622 } 00:07:42.622 ] 00:07:42.622 } 00:07:42.622 ] 00:07:42.622 } 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58886 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58886 ']' 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58886 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.622 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58886 00:07:42.890 killing process with pid 58886 00:07:42.890 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.890 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.890 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58886' 00:07:42.890 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58886 00:07:42.890 13:37:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58886 00:07:43.458 13:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58931 00:07:43.458 13:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:43.458 13:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58931 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58931 ']' 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58931 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58931 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58931' 00:07:48.770 killing process with pid 58931 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58931 00:07:48.770 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58931 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:49.028 00:07:49.028 real 0m7.438s 00:07:49.028 user 0m6.930s 00:07:49.028 sys 0m0.920s 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:49.028 ************************************ 00:07:49.028 END TEST skip_rpc_with_json 00:07:49.028 ************************************ 00:07:49.028 13:37:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:49.028 13:37:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.028 13:37:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.028 13:37:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.028 ************************************ 00:07:49.028 START TEST skip_rpc_with_delay 00:07:49.028 ************************************ 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:49.028 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.029 [2024-11-06 13:37:29.874582] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
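Note: the *ERROR* directly above is the expected outcome, not a failure. skip_rpc_with_delay runs the target under the NOT() helper, so the test passes precisely because spdk_app_start() rejects the flag combination. A minimal sketch of what is being asserted, assuming a built tree at /home/vagrant/spdk_repo/spdk (path taken from the log):

    # --wait-for-rpc asks the app to pause init until an RPC tells it to
    # continue; --no-rpc-server starts no RPC server at all, so the pair
    # is contradictory and spdk_app_start() bails out before init.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # non-zero; the NOT() wrapper inverts this into a pass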
00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.029 ************************************ 00:07:49.029 END TEST skip_rpc_with_delay 00:07:49.029 ************************************ 00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.029 00:07:49.029 real 0m0.090s 00:07:49.029 user 0m0.047s 00:07:49.029 sys 0m0.042s 00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.029 13:37:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:49.029 13:37:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:49.029 13:37:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:49.029 13:37:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:49.029 13:37:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.029 13:37:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.029 13:37:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.287 ************************************ 00:07:49.287 START TEST exit_on_failed_rpc_init 00:07:49.287 ************************************ 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59040 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59040 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 59040 ']' 00:07:49.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.287 13:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:49.287 [2024-11-06 13:37:30.027070] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:07:49.287 [2024-11-06 13:37:30.027182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:07:49.287 [2024-11-06 13:37:30.181322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.546 [2024-11-06 13:37:30.262401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:50.113 13:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:50.113 [2024-11-06 13:37:31.039152] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:07:50.113 [2024-11-06 13:37:31.039246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:07:50.373 [2024-11-06 13:37:31.192288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.373 [2024-11-06 13:37:31.270726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.373 [2024-11-06 13:37:31.270821] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
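Note: this *ERROR* pair is likewise deliberate. exit_on_failed_rpc_init starts a second spdk_tgt against the RPC socket the first target already owns, then asserts that RPC initialization fails and the app stops with a non-zero code. A sketch of the collision, assuming the default socket /var/tmp/spdk.sock (both paths appear in the log):

    # First target binds the default RPC Unix domain socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # Give it a moment to create /var/tmp/spdk.sock; the test proper
    # polls with waitforlisten instead of sleeping.
    sleep 1
    # Second target, different core mask, same socket: _spdk_rpc_listen
    # reports the path in use and the app exits non-zero.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
    echo $?    # non-zero is the asserted outcome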
00:07:50.373 [2024-11-06 13:37:31.270834] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:50.373 [2024-11-06 13:37:31.270843] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:50.632 13:37:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59040 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 59040 ']' 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 59040 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59040 00:07:50.633 killing process with pid 59040 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59040' 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 59040 00:07:50.633 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 59040 00:07:51.200 ************************************ 00:07:51.200 END TEST exit_on_failed_rpc_init 00:07:51.200 ************************************ 00:07:51.200 00:07:51.200 real 0m1.993s 00:07:51.200 user 0m2.109s 00:07:51.200 sys 0m0.589s 00:07:51.200 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.200 13:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 13:37:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:51.200 00:07:51.200 real 0m15.620s 00:07:51.200 user 0m14.408s 00:07:51.200 sys 0m2.261s 00:07:51.200 13:37:32 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.200 ************************************ 00:07:51.200 END TEST skip_rpc 00:07:51.200 ************************************ 00:07:51.200 13:37:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 13:37:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:51.200 13:37:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.200 13:37:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.200 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 
************************************ 00:07:51.200 START TEST rpc_client 00:07:51.200 ************************************ 00:07:51.200 13:37:32 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:51.458 * Looking for test storage... 00:07:51.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.458 13:37:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.458 13:37:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.458 --rc genhtml_branch_coverage=1 00:07:51.458 --rc genhtml_function_coverage=1 00:07:51.458 --rc genhtml_legend=1 00:07:51.458 --rc geninfo_all_blocks=1 00:07:51.458 --rc geninfo_unexecuted_blocks=1 00:07:51.458 00:07:51.458 ' 00:07:51.459 13:37:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.459 --rc genhtml_branch_coverage=1 00:07:51.459 --rc genhtml_function_coverage=1 00:07:51.459 --rc genhtml_legend=1 00:07:51.459 --rc geninfo_all_blocks=1 00:07:51.459 --rc geninfo_unexecuted_blocks=1 00:07:51.459 00:07:51.459 ' 00:07:51.459 13:37:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.459 --rc genhtml_branch_coverage=1 00:07:51.459 --rc genhtml_function_coverage=1 00:07:51.459 --rc genhtml_legend=1 00:07:51.459 --rc geninfo_all_blocks=1 00:07:51.459 --rc geninfo_unexecuted_blocks=1 00:07:51.459 00:07:51.459 ' 00:07:51.459 13:37:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.459 --rc genhtml_branch_coverage=1 00:07:51.459 --rc genhtml_function_coverage=1 00:07:51.459 --rc genhtml_legend=1 00:07:51.459 --rc geninfo_all_blocks=1 00:07:51.459 --rc geninfo_unexecuted_blocks=1 00:07:51.459 00:07:51.459 ' 00:07:51.459 13:37:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:51.459 OK 00:07:51.459 13:37:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:51.459 00:07:51.459 real 0m0.254s 00:07:51.459 user 0m0.142s 00:07:51.459 sys 0m0.128s 00:07:51.459 13:37:32 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.459 13:37:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:51.459 ************************************ 00:07:51.459 END TEST rpc_client 00:07:51.459 ************************************ 00:07:51.459 13:37:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:51.459 13:37:32 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.459 13:37:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.459 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.459 ************************************ 00:07:51.459 START TEST json_config 00:07:51.459 ************************************ 00:07:51.459 13:37:32 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.719 13:37:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.719 13:37:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.719 13:37:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.719 13:37:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.719 13:37:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.719 13:37:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:51.719 13:37:32 json_config -- scripts/common.sh@345 -- # : 1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.719 13:37:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.719 13:37:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@353 -- # local d=1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.719 13:37:32 json_config -- scripts/common.sh@355 -- # echo 1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.719 13:37:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@353 -- # local d=2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.719 13:37:32 json_config -- scripts/common.sh@355 -- # echo 2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.719 13:37:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.719 13:37:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.719 13:37:32 json_config -- scripts/common.sh@368 -- # return 0 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.719 --rc genhtml_branch_coverage=1 00:07:51.719 --rc genhtml_function_coverage=1 00:07:51.719 --rc genhtml_legend=1 00:07:51.719 --rc geninfo_all_blocks=1 00:07:51.719 --rc geninfo_unexecuted_blocks=1 00:07:51.719 00:07:51.719 ' 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.719 --rc genhtml_branch_coverage=1 00:07:51.719 --rc genhtml_function_coverage=1 00:07:51.719 --rc genhtml_legend=1 00:07:51.719 --rc geninfo_all_blocks=1 00:07:51.719 --rc geninfo_unexecuted_blocks=1 00:07:51.719 00:07:51.719 ' 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.719 --rc genhtml_branch_coverage=1 00:07:51.719 --rc genhtml_function_coverage=1 00:07:51.719 --rc genhtml_legend=1 00:07:51.719 --rc geninfo_all_blocks=1 00:07:51.719 --rc geninfo_unexecuted_blocks=1 00:07:51.719 00:07:51.719 ' 00:07:51.719 13:37:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.719 --rc genhtml_branch_coverage=1 00:07:51.719 --rc genhtml_function_coverage=1 00:07:51.719 --rc genhtml_legend=1 00:07:51.719 --rc geninfo_all_blocks=1 00:07:51.719 --rc geninfo_unexecuted_blocks=1 00:07:51.719 00:07:51.719 ' 00:07:51.719 13:37:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.719 13:37:32 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.719 13:37:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.719 13:37:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.719 13:37:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.719 13:37:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.719 13:37:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.719 13:37:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.719 13:37:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.720 13:37:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.720 13:37:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:51.720 13:37:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@51 -- # : 0 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.720 13:37:32 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.720 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.720 13:37:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:51.720 INFO: JSON configuration test init 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.720 13:37:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:51.720 13:37:32 json_config -- json_config/common.sh@9 -- # local app=target 00:07:51.720 13:37:32 json_config -- json_config/common.sh@10 -- # shift 
00:07:51.720 13:37:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:51.720 13:37:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:51.720 13:37:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:51.720 13:37:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:51.720 13:37:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:51.720 13:37:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59213 00:07:51.720 13:37:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:51.720 Waiting for target to run... 00:07:51.720 13:37:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:51.720 13:37:32 json_config -- json_config/common.sh@25 -- # waitforlisten 59213 /var/tmp/spdk_tgt.sock 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@833 -- # '[' -z 59213 ']' 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:51.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.720 13:37:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.979 [2024-11-06 13:37:32.686157] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:07:51.979 [2024-11-06 13:37:32.686296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:07:52.546 [2024-11-06 13:37:33.277106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.546 [2024-11-06 13:37:33.351181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:52.805 00:07:52.805 13:37:33 json_config -- json_config/common.sh@26 -- # echo '' 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.805 13:37:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:52.805 13:37:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:52.805 13:37:33 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:53.378 13:37:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.378 13:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:53.378 13:37:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:53.378 13:37:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@54 -- # sort 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:53.637 13:37:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:53.637 13:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:53.637 13:37:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.637 13:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:53.637 13:37:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:53.637 13:37:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:53.895 MallocForNvmf0 00:07:54.154 13:37:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:54.154 13:37:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:54.154 MallocForNvmf1 00:07:54.154 13:37:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:54.154 13:37:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:54.412 [2024-11-06 13:37:35.308445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.412 13:37:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:54.412 13:37:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:54.669 13:37:35 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:54.669 13:37:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:54.928 13:37:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:54.928 13:37:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:55.495 13:37:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:55.495 13:37:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:55.495 [2024-11-06 13:37:36.371329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:55.495 13:37:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:55.495 13:37:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.495 13:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:55.754 13:37:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:55.754 13:37:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.754 13:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:55.754 13:37:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:55.754 13:37:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:07:55.754 13:37:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:56.013 MallocBdevForConfigChangeCheck 00:07:56.013 13:37:36 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:56.013 13:37:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.013 13:37:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:56.013 13:37:36 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:56.013 13:37:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:56.272 INFO: shutting down applications... 00:07:56.272 13:37:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:56.272 13:37:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:56.272 13:37:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:56.272 13:37:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:56.272 13:37:37 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:56.841 Calling clear_iscsi_subsystem 00:07:56.841 Calling clear_nvmf_subsystem 00:07:56.841 Calling clear_nbd_subsystem 00:07:56.841 Calling clear_ublk_subsystem 00:07:56.841 Calling clear_vhost_blk_subsystem 00:07:56.841 Calling clear_vhost_scsi_subsystem 00:07:56.841 Calling clear_bdev_subsystem 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:56.841 13:37:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:57.144 13:37:37 json_config -- json_config/json_config.sh@352 -- # break 00:07:57.144 13:37:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:57.144 13:37:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:57.144 13:37:37 json_config -- json_config/common.sh@31 -- # local app=target 00:07:57.144 13:37:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:57.144 13:37:37 json_config -- json_config/common.sh@35 -- # [[ -n 59213 ]] 00:07:57.144 13:37:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59213 00:07:57.144 13:37:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:57.145 13:37:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:57.145 13:37:37 json_config -- json_config/common.sh@41 -- # kill -0 59213 00:07:57.145 13:37:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:57.712 13:37:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:57.712 13:37:38 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:57.712 13:37:38 json_config -- json_config/common.sh@41 -- # kill -0 59213 00:07:57.712 13:37:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:57.712 13:37:38 json_config -- json_config/common.sh@43 -- # break 00:07:57.712 13:37:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:57.712 SPDK target shutdown done 00:07:57.712 13:37:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:57.712 INFO: relaunching applications... 00:07:57.712 13:37:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:57.712 13:37:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:57.712 13:37:38 json_config -- json_config/common.sh@9 -- # local app=target 00:07:57.712 13:37:38 json_config -- json_config/common.sh@10 -- # shift 00:07:57.712 13:37:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:57.712 13:37:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:57.712 13:37:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:57.712 13:37:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:57.712 13:37:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:57.712 13:37:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59498 00:07:57.712 13:37:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:57.712 13:37:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:57.712 Waiting for target to run... 00:07:57.712 13:37:38 json_config -- json_config/common.sh@25 -- # waitforlisten 59498 /var/tmp/spdk_tgt.sock 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 59498 ']' 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.713 13:37:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:57.713 [2024-11-06 13:37:38.505623] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:07:57.713 [2024-11-06 13:37:38.505723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59498 ] 00:07:58.280 [2024-11-06 13:37:39.058516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.280 [2024-11-06 13:37:39.119125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.847 [2024-11-06 13:37:39.483905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.848 [2024-11-06 13:37:39.515981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:58.848 13:37:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.848 13:37:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:58.848 00:07:58.848 13:37:39 json_config -- json_config/common.sh@26 -- # echo '' 00:07:58.848 13:37:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:58.848 INFO: Checking if target configuration is the same... 00:07:58.848 13:37:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:58.848 13:37:39 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:58.848 13:37:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:58.848 13:37:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:58.848 + '[' 2 -ne 2 ']' 00:07:58.848 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:58.848 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:58.848 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:58.848 +++ basename /dev/fd/62 00:07:58.848 ++ mktemp /tmp/62.XXX 00:07:58.848 + tmp_file_1=/tmp/62.EfI 00:07:58.848 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:58.848 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:58.848 + tmp_file_2=/tmp/spdk_tgt_config.json.GuY 00:07:58.848 + ret=0 00:07:58.848 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:59.106 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:59.106 + diff -u /tmp/62.EfI /tmp/spdk_tgt_config.json.GuY 00:07:59.106 INFO: JSON config files are the same 00:07:59.106 + echo 'INFO: JSON config files are the same' 00:07:59.106 + rm /tmp/62.EfI /tmp/spdk_tgt_config.json.GuY 00:07:59.106 + exit 0 00:07:59.106 13:37:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:59.106 INFO: changing configuration and checking if this can be detected... 00:07:59.106 13:37:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
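Note on the '+' traces above and below: json_diff.sh never compares raw JSON. Both documents are first normalized with config_filter.py -method sort, so subsystem and key ordering cannot produce a spurious mismatch; only a real content difference makes diff return non-zero. A sketch of the same-config check done by hand, assuming the target's RPC socket is /var/tmp/spdk_tgt.sock as above and that config_filter.py reads the config on stdin (that is how json_diff.sh feeds it):

    cd /home/vagrant/spdk_repo/spdk
    # Live config from the relaunched target vs. the file it booted from.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/a
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/b
    diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'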
00:07:59.106 13:37:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:59.106 13:37:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:59.365 13:37:40 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:59.365 13:37:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:59.365 13:37:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.365 + '[' 2 -ne 2 ']' 00:07:59.365 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:59.365 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:59.365 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:59.365 +++ basename /dev/fd/62 00:07:59.365 ++ mktemp /tmp/62.XXX 00:07:59.365 + tmp_file_1=/tmp/62.VYy 00:07:59.365 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:59.365 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:59.623 + tmp_file_2=/tmp/spdk_tgt_config.json.Lio 00:07:59.623 + ret=0 00:07:59.623 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:59.885 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:59.885 + diff -u /tmp/62.VYy /tmp/spdk_tgt_config.json.Lio 00:07:59.885 + ret=1 00:07:59.885 + echo '=== Start of file: /tmp/62.VYy ===' 00:07:59.885 + cat /tmp/62.VYy 00:07:59.885 + echo '=== End of file: /tmp/62.VYy ===' 00:07:59.885 + echo '' 00:07:59.885 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Lio ===' 00:07:59.885 + cat /tmp/spdk_tgt_config.json.Lio 00:07:59.885 + echo '=== End of file: /tmp/spdk_tgt_config.json.Lio ===' 00:07:59.885 + echo '' 00:07:59.885 + rm /tmp/62.VYy /tmp/spdk_tgt_config.json.Lio 00:07:59.885 + exit 1 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:59.885 INFO: configuration change detected. 
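The change-detection step above inverts that check: it mutates the running target by deleting the throwaway MallocBdevForConfigChangeCheck bdev over RPC, saves the configuration again, and requires the diff to fail. A sketch reusing the compare_config helper from the previous note (both function names are illustrative):

    detect_config_change() {
        local sock=/var/tmp/spdk_tgt.sock
        # Remove a bdev that exists only so this test has something to change.
        ./scripts/rpc.py -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
        if compare_config "$sock" spdk_tgt_config.json; then
            echo 'ERROR: configuration change was not detected' >&2
            return 1
        fi
        echo 'INFO: configuration change detected.'
    }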
00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@324 -- # [[ -n 59498 ]] 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:59.885 13:37:40 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.885 13:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 13:37:40 json_config -- json_config/json_config.sh@330 -- # killprocess 59498 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@952 -- # '[' -z 59498 ']' 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@956 -- # kill -0 59498 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@957 -- # uname 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59498 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.144 killing process with pid 59498 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59498' 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@971 -- # kill 59498 00:08:00.144 13:37:40 json_config -- common/autotest_common.sh@976 -- # wait 59498 00:08:00.402 13:37:41 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:00.402 13:37:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:00.402 13:37:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.402 13:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.402 INFO: Success 00:08:00.402 13:37:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:00.402 13:37:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:00.402 00:08:00.402 real 0m8.885s 00:08:00.402 user 0m12.083s 00:08:00.402 sys 0m2.575s 00:08:00.402 
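The killprocess helper traced above guards against reaping the wrong process: it checks that the PID still resolves to the expected reactor command name before signalling, then waits so the exit status is collected. A minimal sketch of that pattern (it assumes the target was started as a child of this shell, since wait only reaps children):

    killprocess() {
        local pid=$1
        # ps prints the command name only while the PID is alive; the SPDK
        # target's primary thread shows up as reactor_0.
        [[ "$(ps --no-headers -o comm= "$pid")" == reactor_0 ]] || return 0
        echo "killing process with pid $pid"
        kill "$pid"
        # Reap the child and propagate its exit status.
        wait "$pid"
    }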
13:37:41 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.402 13:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.402 ************************************ 00:08:00.402 END TEST json_config 00:08:00.402 ************************************ 00:08:00.402 13:37:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:00.402 13:37:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.402 13:37:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.402 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.663 ************************************ 00:08:00.663 START TEST json_config_extra_key 00:08:00.663 ************************************ 00:08:00.663 13:37:41 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.664 --rc genhtml_branch_coverage=1 00:08:00.664 --rc genhtml_function_coverage=1 00:08:00.664 --rc genhtml_legend=1 00:08:00.664 --rc geninfo_all_blocks=1 00:08:00.664 --rc geninfo_unexecuted_blocks=1 00:08:00.664 00:08:00.664 ' 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.664 --rc genhtml_branch_coverage=1 00:08:00.664 --rc genhtml_function_coverage=1 00:08:00.664 --rc genhtml_legend=1 00:08:00.664 --rc geninfo_all_blocks=1 00:08:00.664 --rc geninfo_unexecuted_blocks=1 00:08:00.664 00:08:00.664 ' 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.664 --rc genhtml_branch_coverage=1 00:08:00.664 --rc genhtml_function_coverage=1 00:08:00.664 --rc genhtml_legend=1 00:08:00.664 --rc geninfo_all_blocks=1 00:08:00.664 --rc geninfo_unexecuted_blocks=1 00:08:00.664 00:08:00.664 ' 00:08:00.664 13:37:41 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.664 --rc genhtml_branch_coverage=1 00:08:00.664 --rc genhtml_function_coverage=1 00:08:00.664 --rc genhtml_legend=1 00:08:00.664 --rc geninfo_all_blocks=1 00:08:00.664 --rc geninfo_unexecuted_blocks=1 00:08:00.664 00:08:00.664 ' 00:08:00.664 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.664 13:37:41 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.664 13:37:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.664 13:37:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.664 13:37:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.664 13:37:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.664 13:37:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.664 13:37:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:00.665 13:37:41 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.665 13:37:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:00.665 INFO: launching applications... 
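The declares above show how the harness models each app with bash associative arrays keyed on the app name: one array apiece for PID, RPC socket, CLI parameters, and config file. A minimal sketch of the same registry plus a launcher (the start_app name and the extra_key.json path here are illustrative):

    declare -A app_pid=() app_socket=() app_params=() configs_path=()
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=$PWD/extra_key.json

    start_app() {
        local app=$1
        # Launch in the background and record the PID for later shutdown.
        ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }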
00:08:00.665 13:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59682 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:00.665 Waiting for target to run... 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59682 /var/tmp/spdk_tgt.sock 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 59682 ']' 00:08:00.665 13:37:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:00.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.665 13:37:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:00.959 [2024-11-06 13:37:41.655342] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:00.959 [2024-11-06 13:37:41.655453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:08:01.526 [2024-11-06 13:37:42.200624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.526 [2024-11-06 13:37:42.261361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.786 00:08:01.786 INFO: shutting down applications... 00:08:01.786 13:37:42 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.786 13:37:42 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:01.786 13:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
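The shutdown traced next is a bounded poll rather than a blocking wait: send SIGINT, then probe with kill -0 (signal 0 checks existence without delivering anything) every half second, giving up after 30 tries. A sketch of that loop; the SIGKILL escalation at the end is an assumption about what a harness might do on timeout, not a claim about this one:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 fails once the PID is gone, so we can stop early.
            kill -0 "$pid" 2> /dev/null || return 0
            sleep 0.5
        done
        # Still running after ~15 s; force it down (assumed fallback).
        kill -SIGKILL "$pid" 2> /dev/null
    }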
00:08:01.786 13:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59682 ]] 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59682 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59682 00:08:01.786 13:37:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:02.353 13:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:02.353 13:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:02.353 13:37:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59682 00:08:02.353 13:37:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59682 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:02.923 SPDK target shutdown done 00:08:02.923 13:37:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:02.923 Success 00:08:02.923 13:37:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:02.923 00:08:02.923 real 0m2.275s 00:08:02.923 user 0m1.611s 00:08:02.923 sys 0m0.658s 00:08:02.923 13:37:43 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.923 13:37:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:02.923 ************************************ 00:08:02.923 END TEST json_config_extra_key 00:08:02.923 ************************************ 00:08:02.923 13:37:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:02.923 13:37:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:02.923 13:37:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.923 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.923 ************************************ 00:08:02.923 START TEST alias_rpc 00:08:02.923 ************************************ 00:08:02.923 13:37:43 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:02.923 * Looking for test storage... 
00:08:02.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:02.923 13:37:43 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:02.923 13:37:43 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:02.923 13:37:43 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.182 13:37:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.182 --rc genhtml_branch_coverage=1 00:08:03.182 --rc genhtml_function_coverage=1 00:08:03.182 --rc genhtml_legend=1 00:08:03.182 --rc geninfo_all_blocks=1 00:08:03.182 --rc geninfo_unexecuted_blocks=1 00:08:03.182 00:08:03.182 ' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.182 --rc genhtml_branch_coverage=1 00:08:03.182 --rc genhtml_function_coverage=1 00:08:03.182 --rc genhtml_legend=1 00:08:03.182 --rc geninfo_all_blocks=1 00:08:03.182 --rc geninfo_unexecuted_blocks=1 00:08:03.182 00:08:03.182 ' 00:08:03.182 13:37:43 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.182 --rc genhtml_branch_coverage=1 00:08:03.182 --rc genhtml_function_coverage=1 00:08:03.182 --rc genhtml_legend=1 00:08:03.182 --rc geninfo_all_blocks=1 00:08:03.182 --rc geninfo_unexecuted_blocks=1 00:08:03.182 00:08:03.182 ' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.182 --rc genhtml_branch_coverage=1 00:08:03.182 --rc genhtml_function_coverage=1 00:08:03.182 --rc genhtml_legend=1 00:08:03.182 --rc geninfo_all_blocks=1 00:08:03.182 --rc geninfo_unexecuted_blocks=1 00:08:03.182 00:08:03.182 ' 00:08:03.182 13:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:03.182 13:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.182 13:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59773 00:08:03.182 13:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59773 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 59773 ']' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.182 13:37:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.182 [2024-11-06 13:37:43.992679] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:03.182 [2024-11-06 13:37:43.992783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ] 00:08:03.441 [2024-11-06 13:37:44.146770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.441 [2024-11-06 13:37:44.227020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.018 13:37:44 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.018 13:37:44 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:04.018 13:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:04.584 13:37:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59773 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 59773 ']' 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 59773 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59773 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.584 killing process with pid 59773 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59773' 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@971 -- # kill 59773 00:08:04.584 13:37:45 alias_rpc -- common/autotest_common.sh@976 -- # wait 59773 00:08:05.150 00:08:05.150 real 0m2.152s 00:08:05.150 user 0m2.201s 00:08:05.150 sys 0m0.666s 00:08:05.150 13:37:45 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.150 13:37:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.150 ************************************ 00:08:05.150 END TEST alias_rpc 00:08:05.150 ************************************ 00:08:05.150 13:37:45 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:08:05.150 13:37:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.150 13:37:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.150 13:37:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.151 13:37:45 -- common/autotest_common.sh@10 -- # set +x 00:08:05.151 ************************************ 00:08:05.151 START TEST dpdk_mem_utility 00:08:05.151 ************************************ 00:08:05.151 13:37:45 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.151 * Looking for test storage... 
00:08:05.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:05.151 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.151 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.151 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.410 13:37:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.410 --rc genhtml_branch_coverage=1 00:08:05.410 --rc genhtml_function_coverage=1 00:08:05.410 --rc genhtml_legend=1 00:08:05.410 --rc geninfo_all_blocks=1 00:08:05.410 --rc geninfo_unexecuted_blocks=1 00:08:05.410 00:08:05.410 ' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.410 --rc 
genhtml_branch_coverage=1 00:08:05.410 --rc genhtml_function_coverage=1 00:08:05.410 --rc genhtml_legend=1 00:08:05.410 --rc geninfo_all_blocks=1 00:08:05.410 --rc geninfo_unexecuted_blocks=1 00:08:05.410 00:08:05.410 ' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.410 --rc genhtml_branch_coverage=1 00:08:05.410 --rc genhtml_function_coverage=1 00:08:05.410 --rc genhtml_legend=1 00:08:05.410 --rc geninfo_all_blocks=1 00:08:05.410 --rc geninfo_unexecuted_blocks=1 00:08:05.410 00:08:05.410 ' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.410 --rc genhtml_branch_coverage=1 00:08:05.410 --rc genhtml_function_coverage=1 00:08:05.410 --rc genhtml_legend=1 00:08:05.410 --rc geninfo_all_blocks=1 00:08:05.410 --rc geninfo_unexecuted_blocks=1 00:08:05.410 00:08:05.410 ' 00:08:05.410 13:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:05.410 13:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59873 00:08:05.410 13:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.410 13:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59873 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59873 ']' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.410 13:37:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:05.410 [2024-11-06 13:37:46.220441] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:05.410 [2024-11-06 13:37:46.220537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59873 ] 00:08:05.669 [2024-11-06 13:37:46.357017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.669 [2024-11-06 13:37:46.435215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.237 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.237 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:08:06.237 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:06.237 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:06.237 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.237 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:06.498 { 00:08:06.498 "filename": "/tmp/spdk_mem_dump.txt" 00:08:06.498 } 00:08:06.498 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.498 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:06.498 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:06.498 1 heaps totaling size 818.000000 MiB 00:08:06.498 size: 818.000000 MiB heap id: 0 00:08:06.498 end heaps---------- 00:08:06.498 9 mempools totaling size 603.782043 MiB 00:08:06.498 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:06.498 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:06.498 size: 100.555481 MiB name: bdev_io_59873 00:08:06.498 size: 50.003479 MiB name: msgpool_59873 00:08:06.498 size: 36.509338 MiB name: fsdev_io_59873 00:08:06.498 size: 21.763794 MiB name: PDU_Pool 00:08:06.498 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:06.498 size: 4.133484 MiB name: evtpool_59873 00:08:06.498 size: 0.026123 MiB name: Session_Pool 00:08:06.498 end mempools------- 00:08:06.498 6 memzones totaling size 4.142822 MiB 00:08:06.498 size: 1.000366 MiB name: RG_ring_0_59873 00:08:06.498 size: 1.000366 MiB name: RG_ring_1_59873 00:08:06.498 size: 1.000366 MiB name: RG_ring_4_59873 00:08:06.498 size: 1.000366 MiB name: RG_ring_5_59873 00:08:06.498 size: 0.125366 MiB name: RG_ring_2_59873 00:08:06.498 size: 0.015991 MiB name: RG_ring_3_59873 00:08:06.498 end memzones------- 00:08:06.498 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:06.498 heap id: 0 total size: 818.000000 MiB number of busy elements: 232 number of free elements: 15 00:08:06.498 list of free elements. 
size: 10.818054 MiB 00:08:06.498 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:06.498 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:06.498 element at address: 0x200000400000 with size: 0.996155 MiB 00:08:06.498 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:06.498 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:06.498 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:06.498 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:06.498 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:06.498 element at address: 0x20001ae00000 with size: 0.572266 MiB 00:08:06.498 element at address: 0x200000c00000 with size: 0.490845 MiB 00:08:06.498 element at address: 0x20000a600000 with size: 0.489807 MiB 00:08:06.498 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:06.498 element at address: 0x200003e00000 with size: 0.481201 MiB 00:08:06.498 element at address: 0x200028200000 with size: 0.396484 MiB 00:08:06.498 element at address: 0x200000800000 with size: 0.353394 MiB 00:08:06.498 list of standard malloc elements. size: 199.253052 MiB 00:08:06.498 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:06.498 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:06.498 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:06.498 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:06.498 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:06.498 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:06.498 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:06.498 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:06.498 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:06.498 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000085a780 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000085a980 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:08:06.498 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f080 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f140 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f200 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f380 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f440 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f500 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:08:06.498 element at 
address: 0x20000a67d700 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:08:06.498 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94480 
with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:06.499 element at address: 0x200028265800 with size: 0.000183 MiB 00:08:06.499 element at address: 0x2000282658c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c4c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c780 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c840 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c900 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d080 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d140 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d200 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d380 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d440 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d500 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d680 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d740 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d800 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826d8c0 with size: 0.000183 MiB 
00:08:06.499 element at address: 0x20002826d980 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826da40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826db00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826de00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826df80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e040 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e100 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e280 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e340 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e400 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e580 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e640 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e700 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e880 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826e940 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f000 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f180 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f240 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f300 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f480 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f540 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f600 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f780 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f840 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f900 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:08:06.499 element at 
address: 0x20002826fe40 with size: 0.000183 MiB 00:08:06.499 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:06.499 list of memzone associated elements. size: 607.928894 MiB 00:08:06.499 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:06.499 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:06.499 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:06.499 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:06.499 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:06.499 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59873_0 00:08:06.499 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:06.499 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59873_0 00:08:06.499 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:06.499 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59873_0 00:08:06.499 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:06.499 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:06.499 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:06.499 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:06.499 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:06.499 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59873_0 00:08:06.499 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:06.499 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59873 00:08:06.499 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:06.499 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59873 00:08:06.499 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:06.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:06.499 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:06.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:06.499 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:06.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:06.499 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:06.500 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:06.500 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:06.500 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59873 00:08:06.500 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:06.500 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59873 00:08:06.500 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:06.500 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59873 00:08:06.500 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:06.500 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59873 00:08:06.500 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:06.500 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59873 00:08:06.500 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:06.500 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59873 00:08:06.500 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:06.500 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:06.500 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:06.500 associated memzone info: 
size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:06.500 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:06.500 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:06.500 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:06.500 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59873 00:08:06.500 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:08:06.500 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59873 00:08:06.500 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:06.500 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:06.500 element at address: 0x200028265980 with size: 0.023743 MiB 00:08:06.500 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:06.500 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:08:06.500 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59873 00:08:06.500 element at address: 0x20002826bac0 with size: 0.002441 MiB 00:08:06.500 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:06.500 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:08:06.500 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59873 00:08:06.500 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:06.500 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59873 00:08:06.500 element at address: 0x20000085a840 with size: 0.000305 MiB 00:08:06.500 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59873 00:08:06.500 element at address: 0x20002826c580 with size: 0.000305 MiB 00:08:06.500 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:06.500 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:06.500 13:37:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59873 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59873 ']' 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59873 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59873 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:06.500 killing process with pid 59873 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59873' 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59873 00:08:06.500 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59873 00:08:07.068 00:08:07.068 real 0m1.993s 00:08:07.068 user 0m1.925s 00:08:07.068 sys 0m0.640s 00:08:07.068 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.068 13:37:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:07.068 ************************************ 00:08:07.068 END TEST dpdk_mem_utility 00:08:07.068 ************************************ 00:08:07.068 13:37:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:07.068 13:37:47 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.068 13:37:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.068 13:37:47 -- common/autotest_common.sh@10 -- # set +x 00:08:07.068 ************************************ 00:08:07.068 START TEST event 00:08:07.068 ************************************ 00:08:07.068 13:37:47 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:07.326 * Looking for test storage... 00:08:07.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.326 13:37:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.326 13:37:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.326 13:37:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.326 13:37:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.326 13:37:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.326 13:37:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.326 13:37:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.326 13:37:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.326 13:37:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.326 13:37:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.326 13:37:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.326 13:37:48 event -- scripts/common.sh@344 -- # case "$op" in 00:08:07.326 13:37:48 event -- scripts/common.sh@345 -- # : 1 00:08:07.326 13:37:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.326 13:37:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.326 13:37:48 event -- scripts/common.sh@365 -- # decimal 1 00:08:07.326 13:37:48 event -- scripts/common.sh@353 -- # local d=1 00:08:07.326 13:37:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.326 13:37:48 event -- scripts/common.sh@355 -- # echo 1 00:08:07.326 13:37:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.326 13:37:48 event -- scripts/common.sh@366 -- # decimal 2 00:08:07.326 13:37:48 event -- scripts/common.sh@353 -- # local d=2 00:08:07.326 13:37:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.326 13:37:48 event -- scripts/common.sh@355 -- # echo 2 00:08:07.326 13:37:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.326 13:37:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.326 13:37:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.326 13:37:48 event -- scripts/common.sh@368 -- # return 0 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.326 --rc genhtml_branch_coverage=1 00:08:07.326 --rc genhtml_function_coverage=1 00:08:07.326 --rc genhtml_legend=1 00:08:07.326 --rc geninfo_all_blocks=1 00:08:07.326 --rc geninfo_unexecuted_blocks=1 00:08:07.326 00:08:07.326 ' 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.326 --rc genhtml_branch_coverage=1 00:08:07.326 --rc genhtml_function_coverage=1 00:08:07.326 --rc genhtml_legend=1 00:08:07.326 --rc geninfo_all_blocks=1 00:08:07.326 --rc geninfo_unexecuted_blocks=1 00:08:07.326 00:08:07.326 ' 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.326 --rc genhtml_branch_coverage=1 00:08:07.326 --rc genhtml_function_coverage=1 00:08:07.326 --rc genhtml_legend=1 00:08:07.326 --rc geninfo_all_blocks=1 00:08:07.326 --rc geninfo_unexecuted_blocks=1 00:08:07.326 00:08:07.326 ' 00:08:07.326 13:37:48 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.326 --rc genhtml_branch_coverage=1 00:08:07.326 --rc genhtml_function_coverage=1 00:08:07.327 --rc genhtml_legend=1 00:08:07.327 --rc geninfo_all_blocks=1 00:08:07.327 --rc geninfo_unexecuted_blocks=1 00:08:07.327 00:08:07.327 ' 00:08:07.327 13:37:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:07.327 13:37:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:07.327 13:37:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:07.327 13:37:48 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:07.327 13:37:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.327 13:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.327 ************************************ 00:08:07.327 START TEST event_perf 00:08:07.327 ************************************ 00:08:07.327 13:37:48 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:07.327 Running I/O for 1 seconds...[2024-11-06 
13:37:48.247780] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:07.327 [2024-11-06 13:37:48.247892] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:08:07.585 [2024-11-06 13:37:48.402060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.585 [2024-11-06 13:37:48.481920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.585 [2024-11-06 13:37:48.482190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.585 [2024-11-06 13:37:48.481999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.585 Running I/O for 1 seconds...[2024-11-06 13:37:48.482192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.963 00:08:08.963 lcore 0: 196772 00:08:08.963 lcore 1: 196773 00:08:08.963 lcore 2: 196771 00:08:08.963 lcore 3: 196771 00:08:08.963 done. 00:08:08.963 00:08:08.963 real 0m1.332s 00:08:08.963 user 0m4.131s 00:08:08.963 sys 0m0.077s 00:08:08.963 13:37:49 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.963 13:37:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.963 ************************************ 00:08:08.963 END TEST event_perf 00:08:08.963 ************************************ 00:08:08.963 13:37:49 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:08.963 13:37:49 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:08.963 13:37:49 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.963 13:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.963 ************************************ 00:08:08.963 START TEST event_reactor 00:08:08.963 ************************************ 00:08:08.963 13:37:49 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:08.964 [2024-11-06 13:37:49.641496] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
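Note: the four "lcore N: <count>" lines above are event_perf's per-core event counters for the 1-second run. A minimal sketch of aggregating them by hand, assuming the invocation and output format captured in this log (hugepages configured, run as root):

  out=$(/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1)
  # Sum the "lcore N: count" lines; the third field is the per-core event count.
  echo "$out" | awk '$1 == "lcore" {sum += $3; n++} END {printf "%d events/sec across %d cores\n", sum, n}'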
00:08:08.964 [2024-11-06 13:37:49.641583] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:08:08.964 [2024-11-06 13:37:49.792759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.964 [2024-11-06 13:37:49.868085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.346 test_start 00:08:10.346 oneshot 00:08:10.346 tick 100 00:08:10.346 tick 100 00:08:10.346 tick 250 00:08:10.346 tick 100 00:08:10.346 tick 100 00:08:10.346 tick 250 00:08:10.346 tick 100 00:08:10.346 tick 500 00:08:10.346 tick 100 00:08:10.346 tick 100 00:08:10.346 tick 250 00:08:10.346 tick 100 00:08:10.346 tick 100 00:08:10.346 test_end 00:08:10.346 00:08:10.346 real 0m1.314s 00:08:10.346 user 0m1.160s 00:08:10.346 sys 0m0.048s 00:08:10.346 13:37:50 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.346 13:37:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 ************************************ 00:08:10.346 END TEST event_reactor 00:08:10.346 ************************************ 00:08:10.346 13:37:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:10.346 13:37:51 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:10.346 13:37:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.346 13:37:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 ************************************ 00:08:10.346 START TEST event_reactor_perf 00:08:10.346 ************************************ 00:08:10.346 13:37:51 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:10.346 [2024-11-06 13:37:51.040588] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
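Note: the test_start/oneshot/tick trace above is the reactor test's entire output, apparently one "tick <period>" line per timer expiry. A quick sanity pass over such a trace, assuming the line format shown here, might look like:

  trace=$(/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1)
  # Tally ticks per period (100/250/500 in the run above)...
  echo "$trace" | awk '$1 == "tick" {ticks[$2]++} END {for (p in ticks) printf "period %s: %d ticks\n", p, ticks[p]}'
  # ...and confirm the trace terminated cleanly.
  echo "$trace" | grep -qx test_end && echo 'reactor trace complete'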
00:08:10.346 [2024-11-06 13:37:51.040803] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:08:10.346 [2024-11-06 13:37:51.211124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.605 [2024-11-06 13:37:51.286889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.542 test_start 00:08:11.542 test_end 00:08:11.542 Performance: 445011 events per second 00:08:11.542 00:08:11.542 real 0m1.351s 00:08:11.542 user 0m1.168s 00:08:11.542 sys 0m0.076s 00:08:11.542 13:37:52 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.542 13:37:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.542 ************************************ 00:08:11.542 END TEST event_reactor_perf 00:08:11.542 ************************************ 00:08:11.542 13:37:52 event -- event/event.sh@49 -- # uname -s 00:08:11.542 13:37:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:11.542 13:37:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:11.542 13:37:52 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:11.542 13:37:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.542 13:37:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.542 ************************************ 00:08:11.542 START TEST event_scheduler 00:08:11.542 ************************************ 00:08:11.542 13:37:52 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:11.801 * Looking for test storage... 
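Note: every START TEST/END TEST banner in this log, including the event_scheduler one opening above, is printed by the harness's run_test wrapper (its internals are what the common/autotest_common.sh@1103/@1109 traces show). A simplified stand-in, illustrative only; the real wrapper does more bookkeeping (argument checks, xtrace control):

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }
  # e.g.: run_test_sketch event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh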
00:08:11.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.801 13:37:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:11.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.801 --rc genhtml_branch_coverage=1 00:08:11.801 --rc genhtml_function_coverage=1 00:08:11.801 --rc genhtml_legend=1 00:08:11.801 --rc geninfo_all_blocks=1 00:08:11.801 --rc geninfo_unexecuted_blocks=1 00:08:11.801 00:08:11.801 ' 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:11.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.801 --rc genhtml_branch_coverage=1 00:08:11.801 --rc genhtml_function_coverage=1 00:08:11.801 --rc genhtml_legend=1 00:08:11.801 --rc geninfo_all_blocks=1 00:08:11.801 --rc geninfo_unexecuted_blocks=1 00:08:11.801 00:08:11.801 ' 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:11.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.801 --rc genhtml_branch_coverage=1 00:08:11.801 --rc genhtml_function_coverage=1 00:08:11.801 --rc genhtml_legend=1 00:08:11.801 --rc geninfo_all_blocks=1 00:08:11.801 --rc geninfo_unexecuted_blocks=1 00:08:11.801 00:08:11.801 ' 00:08:11.801 13:37:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:11.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.801 --rc genhtml_branch_coverage=1 00:08:11.801 --rc genhtml_function_coverage=1 00:08:11.801 --rc genhtml_legend=1 00:08:11.801 --rc geninfo_all_blocks=1 00:08:11.801 --rc geninfo_unexecuted_blocks=1 00:08:11.801 00:08:11.801 ' 00:08:11.801 13:37:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:11.801 13:37:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60114 00:08:11.801 13:37:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:11.801 13:37:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.801 13:37:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60114 00:08:11.801 13:37:52 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 60114 ']' 00:08:11.802 13:37:52 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.802 13:37:52 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:11.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.802 13:37:52 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.802 13:37:52 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:11.802 13:37:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:12.061 [2024-11-06 13:37:52.749769] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:12.061 [2024-11-06 13:37:52.749872] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:08:12.061 [2024-11-06 13:37:52.908251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.319 [2024-11-06 13:37:52.993687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.319 [2024-11-06 13:37:52.993923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.319 [2024-11-06 13:37:52.994035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.319 [2024-11-06 13:37:52.994040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:08:12.887 13:37:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:12.887 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.887 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.887 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.887 POWER: Cannot set governor of lcore 0 to performance 00:08:12.887 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.887 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.887 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:12.887 POWER: Cannot set governor of lcore 0 to userspace 00:08:12.887 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:12.887 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:12.887 POWER: Unable to set Power Management Environment for lcore 0 00:08:12.887 [2024-11-06 13:37:53.750357] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:12.887 [2024-11-06 13:37:53.750372] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:12.887 [2024-11-06 13:37:53.750382] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:12.887 [2024-11-06 13:37:53.750395] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:12.887 [2024-11-06 13:37:53.750403] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:12.887 [2024-11-06 13:37:53.750410] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.887 13:37:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.887 13:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 [2024-11-06 13:37:53.891266] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:13.146 13:37:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.146 13:37:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:13.146 13:37:53 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.146 13:37:53 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 ************************************ 00:08:13.146 START TEST scheduler_create_thread 00:08:13.146 ************************************ 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 2 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 3 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 4 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.146 5 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.146 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 6 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 7 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 8 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 9 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 10 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.147 13:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.050 13:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.050 13:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:15.050 13:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:15.050 13:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.050 13:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.617 13:37:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.617 00:08:15.617 real 0m2.614s 00:08:15.617 user 0m0.024s 00:08:15.617 sys 0m0.012s 00:08:15.617 13:37:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.617 13:37:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.617 ************************************ 00:08:15.617 END TEST scheduler_create_thread 00:08:15.617 ************************************ 00:08:15.876 13:37:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:15.876 13:37:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60114 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 60114 ']' 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 60114 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60114 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:15.876 killing process with pid 60114 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60114' 00:08:15.876 13:37:56 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 60114 00:08:15.876 13:37:56 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 60114 00:08:16.134 [2024-11-06 13:37:57.002834] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:16.392 00:08:16.392 real 0m4.861s 00:08:16.392 user 0m8.915s 00:08:16.392 sys 0m0.575s 00:08:16.392 13:37:57 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.392 13:37:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:16.392 ************************************ 00:08:16.392 END TEST event_scheduler 00:08:16.392 ************************************ 00:08:16.651 13:37:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:16.651 13:37:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:16.651 13:37:57 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.651 13:37:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.651 13:37:57 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.651 ************************************ 00:08:16.651 START TEST app_repeat 00:08:16.651 ************************************ 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60237 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.651 Process app_repeat pid: 60237 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60237' 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:16.651 spdk_app_start Round 0 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:16.651 13:37:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60237 ']' 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.651 13:37:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:16.651 [2024-11-06 13:37:57.422615] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
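Note: the app_repeat rounds that follow drive the application purely over its RPC socket; every call below appears verbatim in the traces that follow. A condensed replay of the Round 0 setup (paths are this CI host's):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s $sock bdev_malloc_create 64 4096       # 64 MB bdev, 4096-byte blocks -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096       # second bdev -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  $rpc -s $sock nbd_get_disks                    # JSON mapping of bdevs to /dev/nbd*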
00:08:16.652 [2024-11-06 13:37:57.422717] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:08:16.652 [2024-11-06 13:37:57.579714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.910 [2024-11-06 13:37:57.661732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.910 [2024-11-06 13:37:57.661730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.476 13:37:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:17.476 13:37:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:17.476 13:37:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.734 Malloc0 00:08:17.734 13:37:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.993 Malloc1 00:08:17.993 13:37:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.993 13:37:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:18.282 /dev/nbd0 00:08:18.282 13:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.282 13:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:18.282 13:37:59 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.282 1+0 records in 00:08:18.282 1+0 records out 00:08:18.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514769 s, 8.0 MB/s 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.282 13:37:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:18.282 13:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.282 13:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.282 13:37:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.558 /dev/nbd1 00:08:18.558 13:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:18.558 13:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.558 1+0 records in 00:08:18.558 1+0 records out 00:08:18.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334998 s, 12.2 MB/s 00:08:18.558 13:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.817 13:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:18.817 13:37:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.817 13:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:18.817 13:37:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:18.817 13:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.817 13:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.817 13:37:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.817 13:37:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.817 
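Note: the waitfornbd helper being traced above polls /proc/partitions until the kernel exposes the device, then proves it is readable with a single 4 KiB direct-I/O read. A condensed sketch of that logic (simplified from the autotest_common.sh trace; the retry interval is a guess):

  waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1   # illustrative; the real helper's pacing differs
    done
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
    rm -f /tmp/nbdtest
  }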
13:37:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.076 { 00:08:19.076 "bdev_name": "Malloc0", 00:08:19.076 "nbd_device": "/dev/nbd0" 00:08:19.076 }, 00:08:19.076 { 00:08:19.076 "bdev_name": "Malloc1", 00:08:19.076 "nbd_device": "/dev/nbd1" 00:08:19.076 } 00:08:19.076 ]' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.076 { 00:08:19.076 "bdev_name": "Malloc0", 00:08:19.076 "nbd_device": "/dev/nbd0" 00:08:19.076 }, 00:08:19.076 { 00:08:19.076 "bdev_name": "Malloc1", 00:08:19.076 "nbd_device": "/dev/nbd1" 00:08:19.076 } 00:08:19.076 ]' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.076 /dev/nbd1' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.076 /dev/nbd1' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:19.076 256+0 records in 00:08:19.076 256+0 records out 00:08:19.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656688 s, 160 MB/s 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.076 256+0 records in 00:08:19.076 256+0 records out 00:08:19.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285098 s, 36.8 MB/s 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.076 256+0 records in 00:08:19.076 256+0 records out 00:08:19.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285361 s, 36.7 MB/s 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.076 13:37:59 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.076 13:37:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.334 13:38:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.592 13:38:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.592 13:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:19.850 13:38:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:19.850 13:38:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:20.109 13:38:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:20.367 [2024-11-06 13:38:01.264922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.626 [2024-11-06 13:38:01.342673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.626 [2024-11-06 13:38:01.342675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.626 [2024-11-06 13:38:01.419420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:20.626 [2024-11-06 13:38:01.419488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:23.157 13:38:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:23.157 spdk_app_start Round 1 00:08:23.157 13:38:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:23.157 13:38:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:08:23.157 13:38:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60237 ']' 00:08:23.157 13:38:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.157 13:38:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.158 13:38:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
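Note: before the teardown above, Round 0 seeded 1 MiB of random data and cmp-verified it on both exported devices; Round 1, starting here, repeats the same cycle. The core of that write/verify pass, condensed from the dd/cmp commands in this log:

  rand=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$rand" bs=4096 count=256          # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct # write through the nbd device
    cmp -b -n 1M "$rand" "$dev"                            # nonzero exit on any mismatched byte
  done
  rm "$rand"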
00:08:23.158 13:38:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.158 13:38:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:23.416 13:38:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.416 13:38:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:23.416 13:38:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:23.674 Malloc0 00:08:23.674 13:38:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:23.933 Malloc1 00:08:23.933 13:38:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.933 13:38:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.192 /dev/nbd0 00:08:24.450 13:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.450 13:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.450 1+0 records in 00:08:24.450 1+0 records out 
00:08:24.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036368 s, 11.3 MB/s 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:24.450 13:38:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:24.450 13:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.450 13:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.451 13:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:24.709 /dev/nbd1 00:08:24.709 13:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:24.709 13:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.709 1+0 records in 00:08:24.709 1+0 records out 00:08:24.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402311 s, 10.2 MB/s 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:24.709 13:38:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:24.709 13:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.709 13:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.710 13:38:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.710 13:38:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.710 13:38:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.967 13:38:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:24.967 { 00:08:24.967 "bdev_name": "Malloc0", 00:08:24.967 "nbd_device": "/dev/nbd0" 00:08:24.967 }, 00:08:24.967 { 00:08:24.967 "bdev_name": "Malloc1", 00:08:24.967 "nbd_device": "/dev/nbd1" 00:08:24.967 } 
00:08:24.967 ]' 00:08:24.967 13:38:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:24.967 { 00:08:24.967 "bdev_name": "Malloc0", 00:08:24.967 "nbd_device": "/dev/nbd0" 00:08:24.967 }, 00:08:24.967 { 00:08:24.967 "bdev_name": "Malloc1", 00:08:24.967 "nbd_device": "/dev/nbd1" 00:08:24.967 } 00:08:24.967 ]' 00:08:24.967 13:38:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.226 /dev/nbd1' 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.226 /dev/nbd1' 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.226 256+0 records in 00:08:25.226 256+0 records out 00:08:25.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128409 s, 81.7 MB/s 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.226 256+0 records in 00:08:25.226 256+0 records out 00:08:25.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026766 s, 39.2 MB/s 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.226 13:38:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.226 256+0 records in 00:08:25.226 256+0 records out 00:08:25.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301353 s, 34.8 MB/s 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.226 13:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.485 13:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.744 13:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.002 13:38:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.003 13:38:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.003 13:38:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.003 13:38:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:26.261 13:38:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:26.520 [2024-11-06 13:38:07.351678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.520 [2024-11-06 13:38:07.419728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.520 [2024-11-06 13:38:07.419729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.778 [2024-11-06 13:38:07.498730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.778 [2024-11-06 13:38:07.498794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:29.312 spdk_app_start Round 2 00:08:29.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:29.312 13:38:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:29.312 13:38:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:29.312 13:38:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60237 ']' 00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.312 13:38:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:29.570 13:38:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.570 13:38:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:29.570 13:38:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.829 Malloc0 00:08:29.829 13:38:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.088 Malloc1 00:08:30.088 13:38:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.088 13:38:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:30.347 /dev/nbd0 00:08:30.347 13:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:30.348 13:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.348 1+0 records in 00:08:30.348 1+0 records out 
00:08:30.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276869 s, 14.8 MB/s 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.348 13:38:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:30.348 13:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.348 13:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.348 13:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:30.607 /dev/nbd1 00:08:30.607 13:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:30.607 13:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:30.607 13:38:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:30.607 13:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:30.607 13:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.607 13:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.607 13:38:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.867 1+0 records in 00:08:30.867 1+0 records out 00:08:30.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447104 s, 9.2 MB/s 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.867 13:38:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:30.867 13:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.867 13:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.867 13:38:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.867 13:38:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.867 13:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.126 { 00:08:31.126 "bdev_name": "Malloc0", 00:08:31.126 "nbd_device": "/dev/nbd0" 00:08:31.126 }, 00:08:31.126 { 00:08:31.126 "bdev_name": "Malloc1", 00:08:31.126 "nbd_device": "/dev/nbd1" 00:08:31.126 } 
00:08:31.126 ]' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.126 { 00:08:31.126 "bdev_name": "Malloc0", 00:08:31.126 "nbd_device": "/dev/nbd0" 00:08:31.126 }, 00:08:31.126 { 00:08:31.126 "bdev_name": "Malloc1", 00:08:31.126 "nbd_device": "/dev/nbd1" 00:08:31.126 } 00:08:31.126 ]' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.126 /dev/nbd1' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.126 /dev/nbd1' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.126 256+0 records in 00:08:31.126 256+0 records out 00:08:31.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129754 s, 80.8 MB/s 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.126 256+0 records in 00:08:31.126 256+0 records out 00:08:31.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217291 s, 48.3 MB/s 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.126 256+0 records in 00:08:31.126 256+0 records out 00:08:31.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288017 s, 36.4 MB/s 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.126 13:38:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.126 13:38:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.126 13:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.385 13:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.953 13:38:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.212 13:38:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.212 13:38:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:32.471 13:38:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:32.730 [2024-11-06 13:38:13.456231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.730 [2024-11-06 13:38:13.536362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.730 [2024-11-06 13:38:13.536363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.730 [2024-11-06 13:38:13.614211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:32.730 [2024-11-06 13:38:13.614334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.023 13:38:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:08:36.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60237 ']' 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:36.023 13:38:16 event.app_repeat -- event/event.sh@39 -- # killprocess 60237 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 60237 ']' 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 60237 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60237 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:36.023 killing process with pid 60237 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60237' 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@971 -- # kill 60237 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@976 -- # wait 60237 00:08:36.023 spdk_app_start is called in Round 0. 00:08:36.023 Shutdown signal received, stop current app iteration 00:08:36.023 Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 reinitialization... 00:08:36.023 spdk_app_start is called in Round 1. 00:08:36.023 Shutdown signal received, stop current app iteration 00:08:36.023 Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 reinitialization... 00:08:36.023 spdk_app_start is called in Round 2. 00:08:36.023 Shutdown signal received, stop current app iteration 00:08:36.023 Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 reinitialization... 00:08:36.023 spdk_app_start is called in Round 3. 00:08:36.023 Shutdown signal received, stop current app iteration 00:08:36.023 13:38:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:36.023 13:38:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:36.023 00:08:36.023 real 0m19.397s 00:08:36.023 user 0m42.498s 00:08:36.023 sys 0m4.157s 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.023 ************************************ 00:08:36.023 END TEST app_repeat 00:08:36.023 13:38:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.023 ************************************ 00:08:36.023 13:38:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:36.023 13:38:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:36.023 13:38:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:36.023 13:38:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.023 13:38:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:36.023 ************************************ 00:08:36.023 START TEST cpu_locks 00:08:36.023 ************************************ 00:08:36.023 13:38:16 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:36.283 * Looking for test storage... 
00:08:36.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:36.283 13:38:16 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.283 13:38:16 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.283 13:38:16 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.283 13:38:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.283 --rc genhtml_branch_coverage=1 00:08:36.283 --rc genhtml_function_coverage=1 00:08:36.283 --rc genhtml_legend=1 00:08:36.283 --rc geninfo_all_blocks=1 00:08:36.283 --rc geninfo_unexecuted_blocks=1 00:08:36.283 00:08:36.283 ' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.283 --rc genhtml_branch_coverage=1 00:08:36.283 --rc genhtml_function_coverage=1 
00:08:36.283 --rc genhtml_legend=1 00:08:36.283 --rc geninfo_all_blocks=1 00:08:36.283 --rc geninfo_unexecuted_blocks=1 00:08:36.283 00:08:36.283 ' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:36.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.283 --rc genhtml_branch_coverage=1 00:08:36.283 --rc genhtml_function_coverage=1 00:08:36.283 --rc genhtml_legend=1 00:08:36.283 --rc geninfo_all_blocks=1 00:08:36.283 --rc geninfo_unexecuted_blocks=1 00:08:36.283 00:08:36.283 ' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.283 --rc genhtml_branch_coverage=1 00:08:36.283 --rc genhtml_function_coverage=1 00:08:36.283 --rc genhtml_legend=1 00:08:36.283 --rc geninfo_all_blocks=1 00:08:36.283 --rc geninfo_unexecuted_blocks=1 00:08:36.283 00:08:36.283 ' 00:08:36.283 13:38:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:36.283 13:38:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:36.283 13:38:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:36.283 13:38:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.283 13:38:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.283 ************************************ 00:08:36.283 START TEST default_locks 00:08:36.283 ************************************ 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60871 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60871 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60871 ']' 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.283 13:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.283 [2024-11-06 13:38:17.163034] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:36.284 [2024-11-06 13:38:17.163130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 00:08:36.542 [2024-11-06 13:38:17.313119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.542 [2024-11-06 13:38:17.389085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.476 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.476 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:37.476 13:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60871 00:08:37.476 13:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60871 00:08:37.476 13:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60871 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 60871 ']' 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 60871 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60871 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.734 killing process with pid 60871 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60871' 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 60871 00:08:37.734 13:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 60871 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60871 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60871 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60871 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60871 ']' 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:38.302 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 ERROR: process (pid: 60871) is no longer running 00:08:38.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60871) - No such process 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:38.302 00:08:38.302 real 0m2.097s 00:08:38.302 user 0m2.074s 00:08:38.302 sys 0m0.727s 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.302 13:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 ************************************ 00:08:38.302 END TEST default_locks 00:08:38.302 ************************************ 00:08:38.561 13:38:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:38.561 13:38:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.561 13:38:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.561 13:38:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.561 ************************************ 00:08:38.561 START TEST default_locks_via_rpc 00:08:38.561 ************************************ 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60935 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60935 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60935 ']' 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:38.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:38.561 13:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.561 [2024-11-06 13:38:19.336151] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:38.561 [2024-11-06 13:38:19.336240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60935 ] 00:08:38.561 [2024-11-06 13:38:19.471371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.823 [2024-11-06 13:38:19.548362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60935 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60935 00:08:39.388 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:39.955 13:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60935 00:08:39.955 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 60935 ']' 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 60935 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60935 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:39.956 killing process with pid 60935 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60935' 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 60935 00:08:39.956 13:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 60935 00:08:40.523 00:08:40.523 real 0m2.093s 00:08:40.523 user 0m2.098s 00:08:40.523 sys 0m0.721s 00:08:40.523 13:38:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.523 13:38:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.523 ************************************ 00:08:40.523 END TEST default_locks_via_rpc 00:08:40.523 ************************************ 00:08:40.523 13:38:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:40.523 13:38:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.523 13:38:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.523 13:38:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.523 ************************************ 00:08:40.523 START TEST non_locking_app_on_locked_coremask 00:08:40.523 ************************************ 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61004 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61004 /var/tmp/spdk.sock 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61004 ']' 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.523 13:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.827 [2024-11-06 13:38:21.500441] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:40.827 [2024-11-06 13:38:21.500539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61004 ] 00:08:40.827 [2024-11-06 13:38:21.654930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.827 [2024-11-06 13:38:21.735787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61032 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61032 /var/tmp/spdk2.sock 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61032 ']' 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.762 13:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:41.762 [2024-11-06 13:38:22.502122] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:41.762 [2024-11-06 13:38:22.502229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61032 ] 00:08:41.762 [2024-11-06 13:38:22.654613] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
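This is the crux of non_locking_app_on_locked_coremask: the first target already holds the core-0 lock file, and the second target on the same mask only comes up because --disable-cpumask-locks skips the claim (hence the "CPU core locks deactivated" notice above). A minimal sketch of that pattern, using the binary path, mask, and sockets visible in the trace (backgrounding and pid capture are assumptions):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # claims /var/tmp/spdk_cpu_lock_000
    spdk_tgt_pid=$!
    # same core mask, so a normal start would abort on the held lock:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!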
00:08:41.762 [2024-11-06 13:38:22.654669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.053 [2024-11-06 13:38:22.813926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.617 13:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.617 13:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:42.617 13:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61004 00:08:42.617 13:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61004 00:08:42.617 13:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61004 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61004 ']' 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61004 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61004 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61004' 00:08:43.552 killing process with pid 61004 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61004 00:08:43.552 13:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61004 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61032 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61032 ']' 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61032 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61032 00:08:44.975 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.976 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.976 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61032' 00:08:44.976 killing process with pid 61032 00:08:44.976 13:38:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61032 00:08:44.976 13:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61032 00:08:45.235 00:08:45.235 real 0m4.618s 00:08:45.235 user 0m4.866s 00:08:45.235 sys 0m1.397s 00:08:45.235 13:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.235 13:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.235 ************************************ 00:08:45.235 END TEST non_locking_app_on_locked_coremask 00:08:45.235 ************************************ 00:08:45.235 13:38:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:45.235 13:38:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.235 13:38:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.235 13:38:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:45.235 ************************************ 00:08:45.235 START TEST locking_app_on_unlocked_coremask 00:08:45.235 ************************************ 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61123 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61123 /var/tmp/spdk.sock 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61123 ']' 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.235 13:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.494 [2024-11-06 13:38:26.193178] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:45.494 [2024-11-06 13:38:26.193280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:08:45.494 [2024-11-06 13:38:26.332742] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
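The kill/wait sequences above are the harness's standard teardown helper. A hedged sketch of killprocess, reduced to the calls that actually appear in the xtrace (the uname/sudo branches and retry logic are elided):

    killprocess() {
      local pid=$1
      kill -0 "$pid"                  # verify the target is still alive
      echo "killing process with pid $pid"
      kill "$pid"                     # SIGTERM; the reactor shuts down cleanly
      wait "$pid"                     # reap it, since spdk_tgt is a child of this shell
    }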
00:08:45.494 [2024-11-06 13:38:26.332799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.494 [2024-11-06 13:38:26.412021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61151 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61151 /var/tmp/spdk2.sock 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61151 ']' 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:46.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.431 13:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.431 [2024-11-06 13:38:27.199802] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
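waitforlisten, whose "Waiting for process to start up and listen on UNIX domain socket..." banner recurs above, polls until the target's RPC socket answers or the process dies. This is a conceptual sketch only, with assumed retry bounds and an assumed rpc_get_methods probe (the real helper lives in test/common/autotest_common.sh):

    waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1           # target died while starting up
        if [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
          return 0                           # socket exists and the RPC server answers
        fi
        sleep 0.1
      done
      return 1
    }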
00:08:46.431 [2024-11-06 13:38:27.199909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:08:46.431 [2024-11-06 13:38:27.351799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.690 [2024-11-06 13:38:27.520237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.623 13:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.623 13:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:47.623 13:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61151 00:08:47.623 13:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61151 00:08:47.623 13:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61123 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61123 ']' 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61123 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.190 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61123 00:08:48.448 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.448 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.448 killing process with pid 61123 00:08:48.448 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61123' 00:08:48.448 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61123 00:08:48.448 13:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61123 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61151 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61151 ']' 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61151 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61151 00:08:49.409 killing process with pid 61151 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.409 13:38:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61151' 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61151 00:08:49.409 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61151 00:08:49.976 00:08:49.976 real 0m4.640s 00:08:49.976 user 0m4.845s 00:08:49.976 sys 0m1.422s 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.976 ************************************ 00:08:49.976 END TEST locking_app_on_unlocked_coremask 00:08:49.976 ************************************ 00:08:49.976 13:38:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:49.976 13:38:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:49.976 13:38:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.976 13:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.976 ************************************ 00:08:49.976 START TEST locking_app_on_locked_coremask 00:08:49.976 ************************************ 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61230 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61230 /var/tmp/spdk.sock 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61230 ']' 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.976 13:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.234 [2024-11-06 13:38:30.912454] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:50.234 [2024-11-06 13:38:30.912579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:08:50.234 [2024-11-06 13:38:31.074368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.234 [2024-11-06 13:38:31.149651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61259 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61259 /var/tmp/spdk2.sock 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61259 /var/tmp/spdk2.sock 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:51.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61259 /var/tmp/spdk2.sock 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61259 ']' 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.171 13:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.171 [2024-11-06 13:38:31.886663] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
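NOT inverts a command's exit status so the test can assert that waitforlisten fails here: pid 61259 must die on the contested core-0 lock rather than start listening. A simplified sketch of its core, with the valid_exec_arg type checks elided:

    NOT() {
      local es=0
      "$@" || es=$?      # run the wrapped command, capture a nonzero status
      (( es != 0 ))      # succeed only if the command failed
    }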
00:08:51.171 [2024-11-06 13:38:31.886769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:08:51.171 [2024-11-06 13:38:32.043965] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61230 has claimed it. 00:08:51.171 [2024-11-06 13:38:32.044027] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:51.742 ERROR: process (pid: 61259) is no longer running 00:08:51.742 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61259) - No such process 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61230 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61230 00:08:51.742 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61230 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61230 ']' 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61230 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.310 13:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61230 00:08:52.310 killing process with pid 61230 00:08:52.310 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.310 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.310 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61230' 00:08:52.310 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61230 00:08:52.310 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61230 00:08:52.878 ************************************ 00:08:52.878 END TEST locking_app_on_locked_coremask 00:08:52.878 ************************************ 00:08:52.878 00:08:52.878 real 0m2.727s 00:08:52.878 user 0m2.930s 00:08:52.878 sys 0m0.828s 00:08:52.878 13:38:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.878 13:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.878 13:38:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.878 13:38:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:52.878 13:38:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.878 13:38:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.878 ************************************ 00:08:52.878 START TEST locking_overlapped_coremask 00:08:52.878 ************************************ 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61316 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61316 /var/tmp/spdk.sock 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61316 ']' 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.878 13:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.878 [2024-11-06 13:38:33.697501] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:08:52.878 [2024-11-06 13:38:33.697582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:08:53.138 [2024-11-06 13:38:33.851976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.138 [2024-11-06 13:38:33.931584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.138 [2024-11-06 13:38:33.931761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.138 [2024-11-06 13:38:33.931759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61346 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61346 /var/tmp/spdk2.sock 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61346 /var/tmp/spdk2.sock 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:53.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61346 /var/tmp/spdk2.sock 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61346 ']' 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.704 13:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.963 [2024-11-06 13:38:34.660932] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
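The two masks are picked to collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4. A one-line worked check of the overlap:

    printf '0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. bit 2: both masks claim core 2

which is why the claim error that follows names core 2.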
00:08:53.963 [2024-11-06 13:38:34.661025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61346 ] 00:08:53.963 [2024-11-06 13:38:34.818246] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61316 has claimed it. 00:08:53.963 [2024-11-06 13:38:34.818317] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:54.532 ERROR: process (pid: 61346) is no longer running 00:08:54.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61346) - No such process 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61316 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 61316 ']' 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 61316 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61316 00:08:54.532 killing process with pid 61316 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61316' 00:08:54.532 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 61316 00:08:54.532 13:38:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 61316 00:08:55.100 00:08:55.100 real 0m2.315s 00:08:55.100 user 0m6.277s 00:08:55.100 sys 0m0.601s 00:08:55.100 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.100 ************************************ 00:08:55.100 END TEST locking_overlapped_coremask 00:08:55.100 ************************************ 00:08:55.100 13:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:55.100 13:38:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:55.100 13:38:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.100 13:38:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.100 13:38:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.100 ************************************ 00:08:55.100 START TEST locking_overlapped_coremask_via_rpc 00:08:55.100 ************************************ 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61393 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61393 /var/tmp/spdk.sock 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61393 ']' 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.100 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.360 [2024-11-06 13:38:36.088440] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:55.360 [2024-11-06 13:38:36.088532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61393 ] 00:08:55.360 [2024-11-06 13:38:36.243533] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:55.360 [2024-11-06 13:38:36.243593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.618 [2024-11-06 13:38:36.322949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.618 [2024-11-06 13:38:36.323094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.618 [2024-11-06 13:38:36.323092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61423 00:08:56.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61423 /var/tmp/spdk2.sock 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61423 ']' 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.185 13:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.185 [2024-11-06 13:38:37.044949] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:56.185 [2024-11-06 13:38:37.045245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61423 ] 00:08:56.446 [2024-11-06 13:38:37.197186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:56.446 [2024-11-06 13:38:37.197229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.446 [2024-11-06 13:38:37.358654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.446 [2024-11-06 13:38:37.361979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.446 [2024-11-06 13:38:37.361983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.387 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.387 [2024-11-06 13:38:38.073982] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61393 has claimed it. 
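rpc_cmd here is routed through the Go JSON-RPC client (the GoRPCClient error below), but the same sequence can be driven by hand; a hedged equivalent using the Python client shipped in the repo, assuming both lock-disabled targets from above are still running:

    scripts/rpc.py framework_enable_cpumask_locks        # first target (default socket) claims cores 0-2
    ls /var/tmp/spdk_cpu_lock_*                          # expect _000 _001 _002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # overlaps on core 2 -> fails with Code=-32603 'Failed to claim CPU core: 2',
    # the JSON-RPC error reproduced below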
00:08:57.387 2024/11/06 13:38:38 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:08:57.387 request: 00:08:57.387 { 00:08:57.387 "method": "framework_enable_cpumask_locks", 00:08:57.387 "params": {} 00:08:57.387 } 00:08:57.387 Got JSON-RPC error response 00:08:57.387 GoRPCClient: error on JSON-RPC call 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61393 /var/tmp/spdk.sock 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61393 ']' 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.388 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61423 /var/tmp/spdk2.sock 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61423 ']' 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:57.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.647 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:57.648 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.648 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.906 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.906 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:57.906 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:57.906 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:57.906 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:57.907 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:57.907 00:08:57.907 real 0m2.562s 00:08:57.907 user 0m1.203s 00:08:57.907 sys 0m0.268s 00:08:57.907 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.907 13:38:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.907 ************************************ 00:08:57.907 END TEST locking_overlapped_coremask_via_rpc 00:08:57.907 ************************************ 00:08:57.907 13:38:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:57.907 13:38:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61393 ]] 00:08:57.907 13:38:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61393 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61393 ']' 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61393 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61393 00:08:57.907 killing process with pid 61393 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61393' 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61393 00:08:57.907 13:38:38 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61393 00:08:58.475 13:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61423 ]] 00:08:58.475 13:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61423 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61423 ']' 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61423 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:58.475 
13:38:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61423 00:08:58.475 killing process with pid 61423 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61423' 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61423 00:08:58.475 13:38:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61423 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61393 ]] 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61393 00:08:59.042 13:38:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61393 ']' 00:08:59.042 13:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61393 00:08:59.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61393) - No such process 00:08:59.042 Process with pid 61393 is not found 00:08:59.042 13:38:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61393 is not found' 00:08:59.042 Process with pid 61423 is not found 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61423 ]] 00:08:59.042 13:38:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61423 00:08:59.042 13:38:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61423 ']' 00:08:59.042 13:38:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61423 00:08:59.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61423) - No such process 00:08:59.043 13:38:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61423 is not found' 00:08:59.043 13:38:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:59.043 00:08:59.043 real 0m23.010s 00:08:59.043 user 0m38.095s 00:08:59.043 sys 0m7.249s 00:08:59.043 13:38:39 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.043 13:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.043 ************************************ 00:08:59.043 END TEST cpu_locks 00:08:59.043 ************************************ 00:08:59.043 ************************************ 00:08:59.043 END TEST event 00:08:59.043 ************************************ 00:08:59.043 00:08:59.043 real 0m51.946s 00:08:59.043 user 1m36.252s 00:08:59.043 sys 0m12.587s 00:08:59.043 13:38:39 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.043 13:38:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.302 13:38:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:59.302 13:38:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:59.302 13:38:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.302 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:08:59.302 ************************************ 00:08:59.302 START TEST thread 00:08:59.302 ************************************ 00:08:59.302 13:38:39 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:59.302 * Looking for test storage... 
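The cleanup above is deliberately tolerant: both targets were already killed by their tests, so killprocess falling through to "Process with pid ... is not found" is the expected path before the lock files are removed. A sketch of that teardown using the pid variables from the trace:

    cleanup() {
      [[ -z $spdk_tgt_pid  ]] || killprocess "$spdk_tgt_pid" \
        || echo "Process with pid $spdk_tgt_pid is not found"
      [[ -z $spdk_tgt_pid2 ]] || killprocess "$spdk_tgt_pid2" \
        || echo "Process with pid $spdk_tgt_pid2 is not found"
      rm -f /var/tmp/spdk_cpu_lock_*    # drop any stale core-lock files
    }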
00:08:59.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:59.302 13:38:40 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.302 13:38:40 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.302 13:38:40 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.302 13:38:40 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.302 13:38:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.302 13:38:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.302 13:38:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.302 13:38:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.302 13:38:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.302 13:38:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.302 13:38:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.302 13:38:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.302 13:38:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.302 13:38:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.302 13:38:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.302 13:38:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:59.302 13:38:40 thread -- scripts/common.sh@345 -- # : 1 00:08:59.302 13:38:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.302 13:38:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.302 13:38:40 thread -- scripts/common.sh@365 -- # decimal 1 00:08:59.302 13:38:40 thread -- scripts/common.sh@353 -- # local d=1 00:08:59.302 13:38:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.302 13:38:40 thread -- scripts/common.sh@355 -- # echo 1 00:08:59.561 13:38:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.561 13:38:40 thread -- scripts/common.sh@366 -- # decimal 2 00:08:59.561 13:38:40 thread -- scripts/common.sh@353 -- # local d=2 00:08:59.561 13:38:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.561 13:38:40 thread -- scripts/common.sh@355 -- # echo 2 00:08:59.561 13:38:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.561 13:38:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.561 13:38:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.561 13:38:40 thread -- scripts/common.sh@368 -- # return 0 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.561 --rc genhtml_branch_coverage=1 00:08:59.561 --rc genhtml_function_coverage=1 00:08:59.561 --rc genhtml_legend=1 00:08:59.561 --rc geninfo_all_blocks=1 00:08:59.561 --rc geninfo_unexecuted_blocks=1 00:08:59.561 00:08:59.561 ' 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.561 --rc genhtml_branch_coverage=1 00:08:59.561 --rc genhtml_function_coverage=1 00:08:59.561 --rc genhtml_legend=1 00:08:59.561 --rc geninfo_all_blocks=1 00:08:59.561 --rc geninfo_unexecuted_blocks=1 00:08:59.561 00:08:59.561 ' 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:59.561 --rc genhtml_branch_coverage=1 00:08:59.561 --rc genhtml_function_coverage=1 00:08:59.561 --rc genhtml_legend=1 00:08:59.561 --rc geninfo_all_blocks=1 00:08:59.561 --rc geninfo_unexecuted_blocks=1 00:08:59.561 00:08:59.561 ' 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.561 --rc genhtml_branch_coverage=1 00:08:59.561 --rc genhtml_function_coverage=1 00:08:59.561 --rc genhtml_legend=1 00:08:59.561 --rc geninfo_all_blocks=1 00:08:59.561 --rc geninfo_unexecuted_blocks=1 00:08:59.561 00:08:59.561 ' 00:08:59.561 13:38:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.561 13:38:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.561 ************************************ 00:08:59.561 START TEST thread_poller_perf 00:08:59.561 ************************************ 00:08:59.561 13:38:40 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:59.561 [2024-11-06 13:38:40.295334] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:08:59.562 [2024-11-06 13:38:40.295661] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61589 ] 00:08:59.562 [2024-11-06 13:38:40.451670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.821 [2024-11-06 13:38:40.531727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.821 Running 1000 pollers for 1 seconds with 1 microseconds period. 
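In the summary that follows, poller_cost is simply busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick check of the arithmetic with the figures this run prints:

    awk 'BEGIN { busy = 2501521640; runs = 373000; hz = 2490000000
                 cyc = busy / runs                        # cycles per poller invocation
                 printf "%.0f cyc, %.0f nsec\n", cyc, cyc * 1e9 / hz }'
    # -> 6706 cyc, 2693 nsec, matching the poller_cost line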
00:09:00.758 [2024-11-06T13:38:41.691Z] ======================================
00:09:00.758 [2024-11-06T13:38:41.691Z] busy:2501521640 (cyc)
00:09:00.758 [2024-11-06T13:38:41.691Z] total_run_count: 373000
00:09:00.758 [2024-11-06T13:38:41.691Z] tsc_hz: 2490000000 (cyc)
00:09:00.758 [2024-11-06T13:38:41.691Z] ======================================
00:09:00.758 [2024-11-06T13:38:41.691Z] poller_cost: 6706 (cyc), 2693 (nsec)
00:09:00.758
00:09:00.758 real 0m1.342s
00:09:00.758 user 0m1.170s
00:09:00.758 sys 0m0.064s
00:09:00.758 13:38:41 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:00.758 13:38:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:00.758 ************************************
00:09:00.758 END TEST thread_poller_perf
00:09:00.758 ************************************
00:09:00.758 13:38:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:00.758 13:38:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:09:00.758 13:38:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:00.758 13:38:41 thread -- common/autotest_common.sh@10 -- # set +x
00:09:00.758 ************************************
00:09:00.758 START TEST thread_poller_perf
00:09:00.758 ************************************
00:09:00.758 13:38:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:01.018 [2024-11-06 13:38:41.710430] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:09:01.018 [2024-11-06 13:38:41.710564] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ]
00:09:01.018 [2024-11-06 13:38:41.867663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.018 Running 1000 pollers for 1 seconds with 0 microseconds period.
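The first summary table above is where poller_cost comes from: busy TSC cycles divided by the number of poller executions gives the per-poller cost in cycles, and tsc_hz converts that to nanoseconds. A quick back-of-the-envelope check in shell, using the values printed above (not part of the harness itself):

    busy=2501521640          # busy TSC cycles over the ~1 s window, from the table above
    total_run_count=373000   # poller executions completed in that window
    tsc_hz=2490000000        # TSC ticks per second on this VM
    echo "poller_cost: $(( busy / total_run_count )) (cyc)"                          # 6706
    echo "poller_cost: $(( busy / total_run_count * 1000000000 / tsc_hz )) (nsec)"   # 2693

The second run below passes -l 0 (the "0 microseconds period" in its banner) instead of -l 1, and its per-poller cost lands far lower, a few hundred cycles instead of several thousand.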
00:09:01.018 [2024-11-06 13:38:41.949022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.458 [2024-11-06T13:38:43.391Z] ======================================
00:09:02.458 [2024-11-06T13:38:43.391Z] busy:2492193154 (cyc)
00:09:02.458 [2024-11-06T13:38:43.391Z] total_run_count: 5099000
00:09:02.458 [2024-11-06T13:38:43.391Z] tsc_hz: 2490000000 (cyc)
00:09:02.458 [2024-11-06T13:38:43.391Z] ======================================
00:09:02.458 [2024-11-06T13:38:43.391Z] poller_cost: 488 (cyc), 195 (nsec)
00:09:02.458
00:09:02.458 real 0m1.332s
00:09:02.458 user 0m1.167s
00:09:02.458 sys 0m0.058s
00:09:02.458 ************************************
00:09:02.458 END TEST thread_poller_perf
00:09:02.458 ************************************
00:09:02.458 13:38:43 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:02.458 13:38:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:02.458 13:38:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:09:02.458
00:09:02.458 real 0m3.078s
00:09:02.458 user 0m2.515s
00:09:02.458 sys 0m0.352s
00:09:02.458 ************************************
00:09:02.458 END TEST thread
00:09:02.458 ************************************
00:09:02.458 13:38:43 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:02.458 13:38:43 thread -- common/autotest_common.sh@10 -- # set +x
00:09:02.458 13:38:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:09:02.458 13:38:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:09:02.458 13:38:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:02.458 13:38:43 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:02.458 13:38:43 -- common/autotest_common.sh@10 -- # set +x
00:09:02.458 ************************************
00:09:02.458 START TEST app_cmdline
00:09:02.458 ************************************
00:09:02.458 13:38:43 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:09:02.458 * Looking for test storage...
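Each test group in this log opens with the same probe: common/autotest_common.sh checks whether the installed lcov predates version 2 before deciding which coverage flags to export, and the xtrace shows lt 1.15 2 splitting both version strings on ".", "-" and ":" and comparing them field by field. A minimal standalone sketch of that comparison, assuming purely numeric fields (the real helper additionally sanitizes each field through decimal()):

    # Returns 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2"   # matches the trace: ver1[0]=1 < ver2[0]=2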
00:09:02.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:02.458 13:38:43 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:02.458 13:38:43 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.459 13:38:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.459 --rc genhtml_branch_coverage=1 00:09:02.459 --rc genhtml_function_coverage=1 00:09:02.459 --rc genhtml_legend=1 00:09:02.459 --rc geninfo_all_blocks=1 00:09:02.459 --rc geninfo_unexecuted_blocks=1 00:09:02.459 00:09:02.459 ' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.459 --rc genhtml_branch_coverage=1 00:09:02.459 --rc genhtml_function_coverage=1 00:09:02.459 --rc genhtml_legend=1 00:09:02.459 --rc geninfo_all_blocks=1 00:09:02.459 --rc geninfo_unexecuted_blocks=1 00:09:02.459 
00:09:02.459 ' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.459 --rc genhtml_branch_coverage=1 00:09:02.459 --rc genhtml_function_coverage=1 00:09:02.459 --rc genhtml_legend=1 00:09:02.459 --rc geninfo_all_blocks=1 00:09:02.459 --rc geninfo_unexecuted_blocks=1 00:09:02.459 00:09:02.459 ' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.459 --rc genhtml_branch_coverage=1 00:09:02.459 --rc genhtml_function_coverage=1 00:09:02.459 --rc genhtml_legend=1 00:09:02.459 --rc geninfo_all_blocks=1 00:09:02.459 --rc geninfo_unexecuted_blocks=1 00:09:02.459 00:09:02.459 ' 00:09:02.459 13:38:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:02.459 13:38:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61709 00:09:02.459 13:38:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:02.459 13:38:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61709 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61709 ']' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.459 13:38:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:02.717 [2024-11-06 13:38:43.443045] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
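cmdline.sh starts spdk_tgt with an explicit allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two JSON-RPC methods are reachable over /var/tmp/spdk.sock and every other method must be rejected. That is exactly what the trace below exercises; the same three calls could be replayed by hand once the target is listening (a sketch, not the test script itself):

    # On the allow-list: both should succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods       # returns just the two allowed methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version     # prints the version JSON shown below
    # Off the allow-list: expected to fail with Code=-32601 (Method not found).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats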
00:09:02.717 [2024-11-06 13:38:43.443138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ]
00:09:02.717 [2024-11-06 13:38:43.594083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.975 [2024-11-06 13:38:43.669366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.542 13:38:44 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:03.542 13:38:44 app_cmdline -- common/autotest_common.sh@866 -- # return 0
00:09:03.542 13:38:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:09:03.801 {
00:09:03.801   "fields": {
00:09:03.801     "commit": "079966333",
00:09:03.801     "major": 25,
00:09:03.801     "minor": 1,
00:09:03.801     "patch": 0,
00:09:03.801     "suffix": "-pre"
00:09:03.801   },
00:09:03.801   "version": "SPDK v25.01-pre git sha1 079966333"
00:09:03.801 }
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@26 -- # sort
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:09:03.801 13:38:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.801 13:38:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:09:03.802 13:38:44 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:04.060 2024/11/06 13:38:44 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found
00:09:04.060 request:
00:09:04.060 {
00:09:04.060   "method": "env_dpdk_get_mem_stats",
00:09:04.060   "params": {}
00:09:04.060 }
00:09:04.060 Got JSON-RPC error response
00:09:04.060 GoRPCClient: error on JSON-RPC call
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:04.060 13:38:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61709
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61709 ']'
00:09:04.060 13:38:44 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61709
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@957 -- # uname
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61709
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:04.061 killing process with pid 61709
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61709'
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@971 -- # kill 61709
00:09:04.061 13:38:44 app_cmdline -- common/autotest_common.sh@976 -- # wait 61709
00:09:04.628
00:09:04.628 real 0m2.256s
00:09:04.628 user 0m2.453s
00:09:04.628 sys 0m0.716s
00:09:04.628 13:38:45 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:04.628 ************************************
00:09:04.628 END TEST app_cmdline
00:09:04.628 13:38:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:04.628 ************************************
00:09:04.628 13:38:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:09:04.628 13:38:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:04.628 13:38:45 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:04.628 13:38:45 -- common/autotest_common.sh@10 -- # set +x
00:09:04.628 ************************************
00:09:04.628 START TEST version
00:09:04.628 ************************************
00:09:04.628 13:38:45 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:09:04.887 * Looking for test storage...
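The version test starting here never talks to a running target: it scrapes the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h with a grep | cut | tr pipeline (cut's default tab delimiter splits the #define line), then checks that the Python package reports the matching string. A condensed sketch of the extraction traced below, with the helper's argument handling simplified:

    repo=/home/vagrant/spdk_repo/spdk
    get_header_version() {   # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$repo/include/spdk/version.h" \
            | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # "25.1" for this build
    python3 -c 'import spdk; print(spdk.__version__)'                   # 25.1rc0 here, per the trace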
00:09:04.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:04.887 13:38:45 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:04.887 13:38:45 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:04.887 13:38:45 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:04.887 13:38:45 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:04.887 13:38:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.887 13:38:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.887 13:38:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.887 13:38:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.887 13:38:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.887 13:38:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.887 13:38:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.888 13:38:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.888 13:38:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.888 13:38:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.888 13:38:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.888 13:38:45 version -- scripts/common.sh@344 -- # case "$op" in 00:09:04.888 13:38:45 version -- scripts/common.sh@345 -- # : 1 00:09:04.888 13:38:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.888 13:38:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.888 13:38:45 version -- scripts/common.sh@365 -- # decimal 1 00:09:04.888 13:38:45 version -- scripts/common.sh@353 -- # local d=1 00:09:04.888 13:38:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.888 13:38:45 version -- scripts/common.sh@355 -- # echo 1 00:09:04.888 13:38:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.888 13:38:45 version -- scripts/common.sh@366 -- # decimal 2 00:09:04.888 13:38:45 version -- scripts/common.sh@353 -- # local d=2 00:09:04.888 13:38:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.888 13:38:45 version -- scripts/common.sh@355 -- # echo 2 00:09:04.888 13:38:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.888 13:38:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.888 13:38:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.888 13:38:45 version -- scripts/common.sh@368 -- # return 0 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:04.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.888 --rc genhtml_branch_coverage=1 00:09:04.888 --rc genhtml_function_coverage=1 00:09:04.888 --rc genhtml_legend=1 00:09:04.888 --rc geninfo_all_blocks=1 00:09:04.888 --rc geninfo_unexecuted_blocks=1 00:09:04.888 00:09:04.888 ' 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:04.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.888 --rc genhtml_branch_coverage=1 00:09:04.888 --rc genhtml_function_coverage=1 00:09:04.888 --rc genhtml_legend=1 00:09:04.888 --rc geninfo_all_blocks=1 00:09:04.888 --rc geninfo_unexecuted_blocks=1 00:09:04.888 00:09:04.888 ' 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:04.888 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:04.888 --rc genhtml_branch_coverage=1 00:09:04.888 --rc genhtml_function_coverage=1 00:09:04.888 --rc genhtml_legend=1 00:09:04.888 --rc geninfo_all_blocks=1 00:09:04.888 --rc geninfo_unexecuted_blocks=1 00:09:04.888 00:09:04.888 ' 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:04.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.888 --rc genhtml_branch_coverage=1 00:09:04.888 --rc genhtml_function_coverage=1 00:09:04.888 --rc genhtml_legend=1 00:09:04.888 --rc geninfo_all_blocks=1 00:09:04.888 --rc geninfo_unexecuted_blocks=1 00:09:04.888 00:09:04.888 ' 00:09:04.888 13:38:45 version -- app/version.sh@17 -- # get_header_version major 00:09:04.888 13:38:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # cut -f2 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:04.888 13:38:45 version -- app/version.sh@17 -- # major=25 00:09:04.888 13:38:45 version -- app/version.sh@18 -- # get_header_version minor 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # cut -f2 00:09:04.888 13:38:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:04.888 13:38:45 version -- app/version.sh@18 -- # minor=1 00:09:04.888 13:38:45 version -- app/version.sh@19 -- # get_header_version patch 00:09:04.888 13:38:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # cut -f2 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:04.888 13:38:45 version -- app/version.sh@19 -- # patch=0 00:09:04.888 13:38:45 version -- app/version.sh@20 -- # get_header_version suffix 00:09:04.888 13:38:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # cut -f2 00:09:04.888 13:38:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:04.888 13:38:45 version -- app/version.sh@20 -- # suffix=-pre 00:09:04.888 13:38:45 version -- app/version.sh@22 -- # version=25.1 00:09:04.888 13:38:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:04.888 13:38:45 version -- app/version.sh@28 -- # version=25.1rc0 00:09:04.888 13:38:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:04.888 13:38:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:04.888 13:38:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:04.888 13:38:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:04.888 00:09:04.888 real 0m0.319s 00:09:04.888 user 0m0.198s 00:09:04.888 sys 0m0.174s 00:09:04.888 13:38:45 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.888 13:38:45 version -- common/autotest_common.sh@10 -- # set +x 00:09:04.888 ************************************ 00:09:04.888 END TEST version 00:09:04.888 ************************************ 00:09:05.147 13:38:45 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:05.147 13:38:45 -- spdk/autotest.sh@194 -- # uname -s 00:09:05.147 13:38:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:05.147 13:38:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:05.147 13:38:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:05.147 13:38:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:05.147 13:38:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.147 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:09:05.147 13:38:45 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:05.147 13:38:45 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:05.147 13:38:45 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:05.147 13:38:45 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.147 13:38:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.147 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:09:05.147 ************************************ 00:09:05.147 START TEST nvmf_tcp 00:09:05.147 ************************************ 00:09:05.147 13:38:45 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:05.147 * Looking for test storage... 00:09:05.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:05.147 13:38:46 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.147 13:38:46 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.147 13:38:46 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.406 13:38:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.406 --rc genhtml_branch_coverage=1 00:09:05.406 --rc genhtml_function_coverage=1 00:09:05.406 --rc genhtml_legend=1 00:09:05.406 --rc geninfo_all_blocks=1 00:09:05.406 --rc geninfo_unexecuted_blocks=1 00:09:05.406 00:09:05.406 ' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.406 --rc genhtml_branch_coverage=1 00:09:05.406 --rc genhtml_function_coverage=1 00:09:05.406 --rc genhtml_legend=1 00:09:05.406 --rc geninfo_all_blocks=1 00:09:05.406 --rc geninfo_unexecuted_blocks=1 00:09:05.406 00:09:05.406 ' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.406 --rc genhtml_branch_coverage=1 00:09:05.406 --rc genhtml_function_coverage=1 00:09:05.406 --rc genhtml_legend=1 00:09:05.406 --rc geninfo_all_blocks=1 00:09:05.406 --rc geninfo_unexecuted_blocks=1 00:09:05.406 00:09:05.406 ' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.406 --rc genhtml_branch_coverage=1 00:09:05.406 --rc genhtml_function_coverage=1 00:09:05.406 --rc genhtml_legend=1 00:09:05.406 --rc geninfo_all_blocks=1 00:09:05.406 --rc geninfo_unexecuted_blocks=1 00:09:05.406 00:09:05.406 ' 00:09:05.406 13:38:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:05.406 13:38:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:05.406 13:38:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.406 13:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.406 ************************************ 00:09:05.406 START TEST nvmf_target_core 00:09:05.406 ************************************ 00:09:05.406 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:05.406 * Looking for test storage... 00:09:05.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:05.406 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.406 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.406 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.665 --rc genhtml_branch_coverage=1 00:09:05.665 --rc genhtml_function_coverage=1 00:09:05.665 --rc genhtml_legend=1 00:09:05.665 --rc geninfo_all_blocks=1 00:09:05.665 --rc geninfo_unexecuted_blocks=1 00:09:05.665 00:09:05.665 ' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.665 --rc genhtml_branch_coverage=1 00:09:05.665 --rc genhtml_function_coverage=1 00:09:05.665 --rc genhtml_legend=1 00:09:05.665 --rc geninfo_all_blocks=1 00:09:05.665 --rc geninfo_unexecuted_blocks=1 00:09:05.665 00:09:05.665 ' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.665 --rc genhtml_branch_coverage=1 00:09:05.665 --rc genhtml_function_coverage=1 00:09:05.665 --rc genhtml_legend=1 00:09:05.665 --rc geninfo_all_blocks=1 00:09:05.665 --rc geninfo_unexecuted_blocks=1 00:09:05.665 00:09:05.665 ' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.665 --rc genhtml_branch_coverage=1 00:09:05.665 --rc genhtml_function_coverage=1 00:09:05.665 --rc genhtml_legend=1 00:09:05.665 --rc geninfo_all_blocks=1 00:09:05.665 --rc geninfo_unexecuted_blocks=1 00:09:05.665 00:09:05.665 ' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.665 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.666 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.666 ************************************ 00:09:05.666 START TEST nvmf_abort 00:09:05.666 ************************************ 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:05.666 * Looking for test storage... 
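Two details from sourcing nvmf/common.sh in the trace above are worth calling out. First, the host identity is minted fresh each run: nvme gen-hostnqn returns an NQN built around a random UUID, and the host ID is that UUID portion. Second, the "[: : integer expression expected" line is bash complaining that an empty variable was compared numerically ('[' '' -eq 1 ']'); the script keeps going, so it is noisy but harmless. Minimal sketches of both, using an illustrative variable name in the second (the real one is not visible in the trace):

    # Fresh host NQN per run; the UUID suffix doubles as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # one way to recover the bare UUID

    # The warning pattern, and a guard that avoids it:
    some_tunable=""                          # hypothetical empty setting
    [ "$some_tunable" -eq 1 ]                # -> "[: : integer expression expected"
    [ "${some_tunable:-0}" -eq 1 ] || echo "not enabled"   # default first, then compare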
00:09:05.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.666 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.925 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.925 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.925 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.925 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.926 --rc genhtml_branch_coverage=1 00:09:05.926 --rc genhtml_function_coverage=1 00:09:05.926 --rc genhtml_legend=1 00:09:05.926 --rc geninfo_all_blocks=1 00:09:05.926 --rc geninfo_unexecuted_blocks=1 00:09:05.926 00:09:05.926 ' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.926 --rc genhtml_branch_coverage=1 00:09:05.926 --rc genhtml_function_coverage=1 00:09:05.926 --rc genhtml_legend=1 00:09:05.926 --rc geninfo_all_blocks=1 00:09:05.926 --rc geninfo_unexecuted_blocks=1 00:09:05.926 00:09:05.926 ' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.926 --rc genhtml_branch_coverage=1 00:09:05.926 --rc genhtml_function_coverage=1 00:09:05.926 --rc genhtml_legend=1 00:09:05.926 --rc geninfo_all_blocks=1 00:09:05.926 --rc geninfo_unexecuted_blocks=1 00:09:05.926 00:09:05.926 ' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.926 --rc genhtml_branch_coverage=1 00:09:05.926 --rc genhtml_function_coverage=1 00:09:05.926 --rc genhtml_legend=1 00:09:05.926 --rc geninfo_all_blocks=1 00:09:05.926 --rc geninfo_unexecuted_blocks=1 00:09:05.926 00:09:05.926 ' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:05.926 
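nvmftestinit, entered above, sees NET_TYPE=virt and the TCP transport, so nvmf_veth_init builds a disposable virtual topology instead of touching real NICs: initiator-side veth ends stay in the root namespace, target-side ends move into the nvmf_tgt_ns_spdk network namespace, and everything is stitched together through the nvmf_br bridge (initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4). A condensed sketch of the core steps traced below, showing one initiator pair and one target pair and omitting the link-up and iptables steps:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge both *_br ends together
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" lines that precede the setup below are expected: the init path first tears down any leftovers from a previous run, and on a clean host there is nothing to delete.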
13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.926 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:09:05.927 Cannot find device "nvmf_init_br" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:05.927 Cannot find device "nvmf_init_br2" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:05.927 Cannot find device "nvmf_tgt_br" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.927 Cannot find device "nvmf_tgt_br2" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:05.927 Cannot find device "nvmf_init_br" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:05.927 Cannot find device "nvmf_init_br2" 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:09:05.927 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:06.186 Cannot find device "nvmf_tgt_br" 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:06.186 Cannot find device "nvmf_tgt_br2" 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:06.186 Cannot find device "nvmf_br" 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:06.186 Cannot find device "nvmf_init_if" 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:06.186 Cannot find device "nvmf_init_if2" 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.186 13:38:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:06.186 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.186 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.186 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.186 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.485 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:06.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:06.486 00:09:06.486 --- 10.0.0.3 ping statistics --- 00:09:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.486 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:06.486 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:06.486 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:09:06.486 00:09:06.486 --- 10.0.0.4 ping statistics --- 00:09:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.486 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:06.486 00:09:06.486 --- 10.0.0.1 ping statistics --- 00:09:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.486 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:06.486 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:06.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:09:06.744 00:09:06.744 --- 10.0.0.2 ping statistics --- 00:09:06.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.744 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62142 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62142 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 62142 ']' 00:09:06.744 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.745 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:06.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.745 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.745 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:06.745 13:38:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.745 [2024-11-06 13:38:47.490800] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
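For readers following the nvmf_veth_init block above (common.sh@145-225): condensed, and with the second interface pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) and the link-up steps omitted, the topology it just built can be restated as the sketch below. This is a summary of commands already logged, not new behavior; note the real script tags its iptables rules with -m comment 'SPDK_NVMF:...' so the teardown later in this log can strip them selectively.

# Restatement of nvmf_veth_init, first interface pair only.
ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge                               # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port

The four ping checks that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm both pairs are reachable across the bridge before the target starts.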
00:09:06.745 [2024-11-06 13:38:47.490910] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.745 [2024-11-06 13:38:47.628353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.002 [2024-11-06 13:38:47.713308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.002 [2024-11-06 13:38:47.713367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.002 [2024-11-06 13:38:47.713378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.002 [2024-11-06 13:38:47.713386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.002 [2024-11-06 13:38:47.713393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.002 [2024-11-06 13:38:47.714698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.002 [2024-11-06 13:38:47.714822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.002 [2024-11-06 13:38:47.714826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.568 [2024-11-06 13:38:48.480225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.568 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 Malloc0 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 
Delay0 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 [2024-11-06 13:38:48.583632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.827 13:38:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:08.085 [2024-11-06 13:38:48.791207] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:09.983 Initializing NVMe Controllers 00:09:09.983 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:09.983 controller IO queue size 128 less than required 00:09:09.983 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:09.983 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:09.983 Initialization complete. Launching workers. 
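The abort example above runs with queue depth 128 (-q 128) against a controller I/O queue of the same size, hence the notice that requests may be queued at the NVMe driver; the run still completes, with nearly all submitted I/Os failed by the aborts chasing them, as the counters on the next lines show. The target it connects to was assembled by the rpc_cmd calls above; restated with the plain rpc.py client (the ns_hotplug_stress test later in this log issues equivalent calls this way, and the default /var/tmp/spdk.sock RPC socket is assumed), the setup is roughly:

# Sketch of the target setup; flags copied verbatim from the run above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0   # delay bdev keeps I/O in flight long enough to abort
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420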
00:09:09.983 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35898 00:09:09.983 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35959, failed to submit 62 00:09:09.983 success 35902, unsuccessful 57, failed 0 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.983 13:38:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:10.242 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.242 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:10.242 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.242 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.242 rmmod nvme_tcp 00:09:10.242 rmmod nvme_fabrics 00:09:10.242 rmmod nvme_keyring 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62142 ']' 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62142 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 62142 ']' 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 62142 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62142 00:09:10.502 killing process with pid 62142 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62142' 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 62142 00:09:10.502 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 62142 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:10.761 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.020 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:09:11.021 00:09:11.021 real 0m5.392s 00:09:11.021 user 0m13.338s 00:09:11.021 sys 0m1.515s 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.021 ************************************ 00:09:11.021 END TEST nvmf_abort 00:09:11.021 ************************************ 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.021 ************************************ 00:09:11.021 START TEST nvmf_ns_hotplug_stress 00:09:11.021 ************************************ 00:09:11.021 13:38:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.281 * Looking for test storage... 00:09:11.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.281 --rc genhtml_branch_coverage=1 00:09:11.281 --rc genhtml_function_coverage=1 00:09:11.281 --rc genhtml_legend=1 00:09:11.281 --rc geninfo_all_blocks=1 00:09:11.281 --rc geninfo_unexecuted_blocks=1 00:09:11.281 00:09:11.281 ' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.281 --rc genhtml_branch_coverage=1 00:09:11.281 --rc genhtml_function_coverage=1 00:09:11.281 --rc genhtml_legend=1 00:09:11.281 --rc geninfo_all_blocks=1 00:09:11.281 --rc geninfo_unexecuted_blocks=1 00:09:11.281 00:09:11.281 ' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.281 --rc genhtml_branch_coverage=1 00:09:11.281 --rc genhtml_function_coverage=1 00:09:11.281 --rc genhtml_legend=1 00:09:11.281 --rc geninfo_all_blocks=1 00:09:11.281 --rc geninfo_unexecuted_blocks=1 00:09:11.281 00:09:11.281 ' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.281 --rc genhtml_branch_coverage=1 00:09:11.281 --rc genhtml_function_coverage=1 00:09:11.281 --rc genhtml_legend=1 00:09:11.281 --rc geninfo_all_blocks=1 00:09:11.281 --rc geninfo_unexecuted_blocks=1 00:09:11.281 00:09:11.281 ' 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.281 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:11.282 13:38:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.282 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:11.542 Cannot find device "nvmf_init_br" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:11.542 Cannot find device "nvmf_init_br2" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:11.542 Cannot find device "nvmf_tgt_br" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.542 Cannot find device "nvmf_tgt_br2" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:11.542 Cannot find device "nvmf_init_br" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:11.542 Cannot find device "nvmf_init_br2" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:11.542 Cannot find device "nvmf_tgt_br" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:11.542 Cannot find device "nvmf_tgt_br2" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:11.542 Cannot find device "nvmf_br" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:11.542 Cannot find device "nvmf_init_if" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:11.542 Cannot find device "nvmf_init_if2" 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.542 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.800 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:11.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:11.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.148 ms 00:09:11.801 00:09:11.801 --- 10.0.0.3 ping statistics --- 00:09:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.801 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:11.801 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:11.801 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.119 ms 00:09:11.801 00:09:11.801 --- 10.0.0.4 ping statistics --- 00:09:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.801 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:11.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:11.801 00:09:11.801 --- 10.0.0.1 ping statistics --- 00:09:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.801 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:11.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:11.801 00:09:11.801 --- 10.0.0.2 ping statistics --- 00:09:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.801 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.801 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62476 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62476 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 62476 ']' 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.059 13:38:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.059 13:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.059 [2024-11-06 13:38:52.779370] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:09:12.059 [2024-11-06 13:38:52.779451] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.059 [2024-11-06 13:38:52.932213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.318 [2024-11-06 13:38:53.009672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.318 [2024-11-06 13:38:53.009731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.318 [2024-11-06 13:38:53.009741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.318 [2024-11-06 13:38:53.009750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.318 [2024-11-06 13:38:53.009757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
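On the nvmf_tgt launch line above, -m 0xE is the reactor core mask: binary 1110, i.e. cores 1, 2 and 3, consistent with "Total cores available: 3" and the three "Reactor started on core" notices that follow; -e 0xFFFF enables every tracepoint group, which is why the spdk_trace hints are printed at startup. A quick sketch for decoding such a mask:

# Decode an SPDK core mask such as -m 0xE (binary 1110 -> cores 1, 2, 3).
mask=0xE
for core in {0..7}; do
    if (( (mask >> core) & 1 )); then
        echo "reactor core $core"
    fi
done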
00:09:12.318 [2024-11-06 13:38:53.011144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.318 [2024-11-06 13:38:53.012001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.318 [2024-11-06 13:38:53.012002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:12.927 13:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.186 [2024-11-06 13:38:54.005442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.186 13:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.446 13:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:13.705 [2024-11-06 13:38:54.495145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.705 13:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:13.965 13:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:14.224 Malloc0 00:09:14.224 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:14.484 Delay0 00:09:14.484 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.743 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:15.013 NULL1 00:09:15.013 13:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:15.305 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:15.305 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62609 00:09:15.305 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:15.305 13:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.686 Read completed with error (sct=0, sc=11) 00:09:16.686 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.686 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:16.686 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:16.945 true 00:09:16.945 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:16.945 13:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.883 13:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.142 13:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:18.142 13:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:18.401 true 00:09:18.401 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:18.401 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.659 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.917 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:18.917 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:19.176 true 00:09:19.176 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:19.176 13:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.434 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.692 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:19.692 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:19.692 true 00:09:19.951 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:19.951 13:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.914 13:39:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.172 13:39:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:21.172 13:39:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:21.430 true 00:09:21.430 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:21.430 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.688 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.688 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:21.688 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:21.946 true 00:09:21.946 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:21.946 13:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.881 13:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.139 13:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:23.139 13:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:23.397 true 00:09:23.397 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:23.397 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.655 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.913 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:23.913 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:24.171 true 00:09:24.171 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:24.171 13:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.431 13:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.711 13:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:24.711 13:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:24.970 true 00:09:24.970 13:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:24.970 13:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.907 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.166 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:26.166 13:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:26.424 true 00:09:26.424 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:26.425 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.698 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.956 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:26.956 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:27.214 true 00:09:27.214 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:27.214 13:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.780 13:39:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.037 13:39:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:28.037 13:39:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:28.294 true 00:09:28.294 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:28.294 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.553 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.823 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:28.823 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:29.082 true 00:09:29.082 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:29.082 13:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.341 13:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.601 13:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:29.601 13:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:29.860 true 00:09:29.860 13:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:29.860 13:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.797 13:39:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.056 13:39:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:31.056 13:39:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:31.315 true 00:09:31.315 13:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:31.315 13:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.253 13:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.512 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:32.512 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:32.771 true 00:09:32.771 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:32.771 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.050 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.050 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:33.050 13:39:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:33.308 true 00:09:33.308 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:33.308 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.567 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.825 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:33.825 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:34.083 true 00:09:34.083 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:34.083 13:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 13:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.461 13:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:35.461 13:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:35.720 true 00:09:35.720 13:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:35.720 13:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.655 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.655 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:36.655 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:36.914 true 00:09:36.914 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:36.914 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.172 13:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.452 13:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:37.452 13:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:37.712 true 00:09:37.712 13:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:37.712 13:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.652 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.652 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:38.652 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:38.909 true 00:09:38.909 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:38.909 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.171 13:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.429 13:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:39.429 13:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:39.688 true 00:09:39.688 13:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:39.688 13:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.625 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.625 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:40.625 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:40.884 true 00:09:40.884 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:40.884 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.143 13:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.402 13:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:41.403 13:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:41.662 true 00:09:41.662 13:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:41.662 13:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.921 13:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.180 13:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:42.180 13:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:42.439 true 00:09:42.439 13:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:42.439 13:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:43.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.376 13:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.635 13:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:43.635 13:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:44.203 true 00:09:44.203 13:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:44.203 13:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.772 13:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.031 13:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:45.031 13:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:45.290 true 00:09:45.290 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609 00:09:45.290 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.549 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.549 Initializing NVMe Controllers 00:09:45.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:45.549 Controller IO queue size 128, less than required. 00:09:45.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:45.549 Controller IO queue size 128, less than required. 00:09:45.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:45.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:45.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:45.549 Initialization complete. Launching workers. 
00:09:45.549 ========================================================
00:09:45.549 Latency(us)
00:09:45.549 Device Information : IOPS MiB/s Average min max
00:09:45.549 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 957.12 0.47 70131.07 2991.09 1141968.16
00:09:45.549 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11605.68 5.67 11029.67 2871.44 483283.08
00:09:45.549 ========================================================
00:09:45.549 Total : 12562.79 6.13 15532.41 2871.44 1141968.16
00:09:45.549
00:09:45.808 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:45.808 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:45.808 true
00:09:45.808 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62609
00:09:45.808 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62609) - No such process
00:09:45.808 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62609
00:09:45.808 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:46.066 13:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:46.325 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:46.325 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:46.326 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:46.326 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:46.326 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:46.585 null0
00:09:46.585 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:46.585 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:46.585 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:46.844 null1
00:09:46.844 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:46.844 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:46.844 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:46.844 null2
00:09:46.844 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:46.844 13:39:27
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:47.104 null3 00:09:47.104 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.104 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.104 13:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:47.406 null4 00:09:47.406 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.406 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.406 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:47.664 null5 00:09:47.664 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.665 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.665 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:47.924 null6 00:09:47.924 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:47.924 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:47.924 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:47.924 null7 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
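Before the namespace-churn workers below get going, it is worth decoding the phase that just finished. Against the subsystem assembled earlier (nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420, with the Malloc0-backed Delay0 bdev and the NULL1 bdev attached as namespaces), spdk_nvme_perf ran 512-byte random reads at queue depth 128 for 30 seconds while the script hot-removed namespace 1, re-attached Delay0, and enlarged NULL1 one step per pass (null_size 1001 through 1029). The repeating "Read completed with error (sct=0, sc=11)" lines are the initiator completing reads against a namespace that has just vanished; sct=0/sc=11 matches NVMe generic status 0x0b, Invalid Namespace or Format, and perf was started with -Q 1000, hence the "Message suppressed 999 times" markers between repeats. Reconstructed from the xtrace at ns_hotplug_stress.sh@44-@50, the hotplug loop is roughly the following sketch (inferred from the trace, not the verbatim script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do    # churn for as long as perf (PID 62609) is alive
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                  # 1001, 1002, ... 1029 in this run
        "$rpc" bdev_null_resize NULL1 "$null_size"    # grow the live NULL1 namespace
    done

The perf summary above reflects the asymmetry between the two namespaces: NSID 1 averaged 70131.07 us per I/O at 957.12 IOPS, against 11029.67 us and 11605.68 IOPS for NSID 2.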
00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.183 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63643 63645 63647 63648 63650 63652 63654 63656 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.184 13:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
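The "wait 63643 63645 63647 63648 63650 63652 63654 63656" above collects eight backgrounded add_remove workers, one per null bdev; because they share a single xtrace stream, add and remove calls for different namespace IDs interleave from here to the end of the run. Pieced together from the trace (ns_hotplug_stress.sh@14-@18 for the worker, @58-@66 for the spawner), the pattern is approximately:

    # Worker inferred from the xtrace: pin one bdev to one namespace ID and
    # hot-add/remove it ten times. A sketch, not the verbatim script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    # Spawner: null0..null7 were created above via 'bdev_null_create nullN 100 4096'.
    nthreads=8
    pids=()
    for ((n = 0; n < nthreads; n++)); do
        add_remove "$((n + 1))" "null$n" &
        pids+=($!)
    done
    wait "${pids[@]}"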
00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.444 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.703 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.704 13:39:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.704 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.961 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.219 13:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.219 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.478 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.737 13:39:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.737 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.996 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.254 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.254 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
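Each of those rpc.py invocations is a thin JSON-RPC client call into the running nvmf target. As an illustration only — the request shape follows SPDK's documented nvmf_subsystem_add_ns parameters, and /var/tmp/spdk.sock is the default application socket, both assumptions here — the same add could be issued directly over the socket:

```bash
# Illustrative raw JSON-RPC equivalent of one "@17" add above; request
# shape per SPDK's JSON-RPC docs, default socket path assumed.
printf '%s' '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": { "bdev_name": "null3", "nsid": 4 }
  }
}' | socat - UNIX-CONNECT:/var/tmp/spdk.sock
```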
00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.513 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.772 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.031 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.290 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.291 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.291 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.549 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.549 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.549 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.550 13:39:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.550 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.809 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.068 13:39:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
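Because the adds and removes race each other, any given iteration can leave the subsystem with an arbitrary subset of the eight namespaces attached — which is the point of the stress test. A quick way to inspect the live state between rounds (the jq filter is illustrative; nvmf_get_subsystems returns a JSON array of subsystem objects):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# List the namespace IDs currently attached to cnode1.
$rpc nvmf_get_subsystems \
  | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
               | .namespaces[].nsid'
```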
00:09:52.326 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.327 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.621 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:52.880 13:39:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.880 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.138 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.138 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.138 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.138 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.139 13:39:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.139 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.139 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.139 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.397 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.657 rmmod nvme_tcp 00:09:53.657 rmmod nvme_fabrics 00:09:53.657 rmmod nvme_keyring 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62476 ']' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62476 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 62476 ']' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 62476 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62476 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:53.657 killing process with pid 62476 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62476' 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 62476 00:09:53.657 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 62476 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.916 13:39:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.916 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.176 13:39:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:09:54.176 00:09:54.176 real 0m43.145s 00:09:54.176 user 3m20.047s 00:09:54.176 sys 0m17.600s 00:09:54.176 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.176 13:39:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:54.176 ************************************ 00:09:54.176 END TEST nvmf_ns_hotplug_stress 00:09:54.176 ************************************ 00:09:54.435 13:39:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.436 ************************************ 00:09:54.436 START TEST nvmf_delete_subsystem 00:09:54.436 ************************************ 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:54.436 * Looking for test storage... 00:09:54.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.436 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:54.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.716 --rc genhtml_branch_coverage=1 00:09:54.716 --rc genhtml_function_coverage=1 00:09:54.716 --rc genhtml_legend=1 00:09:54.716 --rc geninfo_all_blocks=1 00:09:54.716 --rc geninfo_unexecuted_blocks=1 00:09:54.716 00:09:54.716 ' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:54.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.716 --rc genhtml_branch_coverage=1 00:09:54.716 --rc genhtml_function_coverage=1 00:09:54.716 --rc genhtml_legend=1 00:09:54.716 --rc geninfo_all_blocks=1 00:09:54.716 --rc geninfo_unexecuted_blocks=1 00:09:54.716 00:09:54.716 ' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:54.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.716 --rc genhtml_branch_coverage=1 00:09:54.716 --rc genhtml_function_coverage=1 00:09:54.716 --rc genhtml_legend=1 00:09:54.716 --rc geninfo_all_blocks=1 00:09:54.716 --rc geninfo_unexecuted_blocks=1 00:09:54.716 00:09:54.716 ' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:54.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.716 --rc genhtml_branch_coverage=1 00:09:54.716 --rc genhtml_function_coverage=1 00:09:54.716 --rc genhtml_legend=1 00:09:54.716 --rc geninfo_all_blocks=1 00:09:54.716 --rc geninfo_unexecuted_blocks=1 00:09:54.716 00:09:54.716 ' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.716 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.716 
13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
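The block of assignments above fixes the test network's naming and addressing. Read together with the ip link / ip addr commands that follow, it describes a two-initiator, two-path veth topology; the layout below is inferred from those peer and bridge commands, with names and 10.0.0.0/24 addresses exactly as assigned in the log (each "---" is a veth pair, and all four *_br ends are enslaved to the bridge nvmf_br):

    host namespace                bridge nvmf_br      netns nvmf_tgt_ns_spdk
    nvmf_init_if   10.0.0.1  ---  nvmf_init_br
    nvmf_init_if2  10.0.0.2  ---  nvmf_init_br2
                                  nvmf_tgt_br   ---   nvmf_tgt_if   10.0.0.3
                                  nvmf_tgt_br2  ---   nvmf_tgt_if2  10.0.0.4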
00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:54.717 Cannot find device "nvmf_init_br" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:54.717 Cannot find device "nvmf_init_br2" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:54.717 Cannot find device "nvmf_tgt_br" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.717 Cannot find device "nvmf_tgt_br2" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:54.717 Cannot find device "nvmf_init_br" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:54.717 Cannot find device "nvmf_init_br2" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:54.717 Cannot find device "nvmf_tgt_br" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:54.717 Cannot find device "nvmf_tgt_br2" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:54.717 Cannot find device "nvmf_br" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:54.717 Cannot find device "nvmf_init_if" 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:09:54.717 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:54.976 Cannot find device "nvmf_init_if2" 00:09:54.976 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
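Every "Cannot find device" and "Cannot open network namespace" message above is expected: before building the topology, nvmf_veth_init tears down whatever a previous run may have left behind, and each teardown command tolerates failure so the script survives under errexit. A minimal sketch of the idiom, with device and namespace names as in the log; the `|| true` pairing is an inference from the bare `true` entries that follow each failed command at the same script line:

    # best-effort teardown of leftovers from an earlier run;
    # `|| true` keeps a missing device from killing a `set -e` script
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true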
00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.376 ms 00:09:54.977 00:09:54.977 --- 10.0.0.3 ping statistics --- 00:09:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.977 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:09:54.977 00:09:54.977 --- 10.0.0.4 ping statistics --- 00:09:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.977 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:54.977 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:55.236 00:09:55.236 --- 10.0.0.1 ping statistics --- 00:09:55.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.236 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:55.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:55.236 00:09:55.236 --- 10.0.0.2 ping statistics --- 00:09:55.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.236 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65076 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65076 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 65076 ']' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:55.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:55.236 13:39:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.236 [2024-11-06 13:39:36.003425] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:09:55.236 [2024-11-06 13:39:36.003755] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.236 [2024-11-06 13:39:36.159858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.495 [2024-11-06 13:39:36.238740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.495 [2024-11-06 13:39:36.238982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.495 [2024-11-06 13:39:36.239077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.495 [2024-11-06 13:39:36.239387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.495 [2024-11-06 13:39:36.239416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.495 [2024-11-06 13:39:36.240764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.495 [2024-11-06 13:39:36.240764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.063 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.322 13:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.322 [2024-11-06 13:39:37.003136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.322 [2024-11-06 13:39:37.031589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.322 NULL1 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.322 Delay0 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.322 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65127 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:56.323 13:39:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:56.581 [2024-11-06 13:39:37.267706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
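With the listener up, a 1000 MiB null bdev wrapped in a delay bdev is exposed as namespace 1, and spdk_nvme_perf (pid 65127) starts a 5-second 70/30 random read/write load against it. Restated as bare RPC calls, the target construction logged above looks like this; all values are exactly as in the log, and `rpc.py` here stands in for the repo's scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is what makes the test bite: its four latency arguments are in microseconds, so every I/O is held for about a second, and when nvmf_delete_subsystem fires two seconds into the run there is a full queue of 128 commands to abort. The sct=0, sc=8 status in the flood of completions below is the NVMe generic status "Command Aborted due to SQ Deletion".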
00:09:58.484 13:39:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.484 13:39:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.484 13:39:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.484 Read completed with error (sct=0, sc=8) 00:09:58.484 starting I/O failed: -6 00:09:58.484 Read completed with error (sct=0, sc=8) 00:09:58.484 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 [2024-11-06 13:39:39.302574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7e0 is same with the state(6) to be set 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 
Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with 
error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 starting I/O failed: -6 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 [2024-11-06 13:39:39.305401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdd40000c40 is same with the state(6) to be set 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Write completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 
00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.485 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:58.486 Read completed with error (sct=0, sc=8) 00:09:58.486 Write completed with error (sct=0, sc=8) 00:09:59.421 [2024-11-06 13:39:40.281706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87ee0 is same with the state(6) to be set 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 [2024-11-06 13:39:40.301076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdd4000d800 is same with the state(6) to be set 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 
Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 [2024-11-06 13:39:40.302341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ba50 is same with the state(6) to be set 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 [2024-11-06 13:39:40.303787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8eea0 is same with the state(6) to be set 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Write completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 Read completed with error (sct=0, sc=8) 00:09:59.421 [2024-11-06 13:39:40.304211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdd4000d020 is same with the state(6) to be set 00:09:59.421 Initializing NVMe Controllers 00:09:59.421 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.421 Controller IO queue size 128, less than required. 
00:09:59.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:59.421 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:59.421 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:59.421 Initialization complete. Launching workers. 00:09:59.421 ======================================================== 00:09:59.421 Latency(us) 00:09:59.421 Device Information : IOPS MiB/s Average min max 00:09:59.421 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.14 0.08 902638.87 519.89 1015424.54 00:09:59.421 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.21 0.08 950887.64 365.10 1999315.08 00:09:59.421 ======================================================== 00:09:59.421 Total : 326.35 0.16 926176.64 365.10 1999315.08 00:09:59.421 00:09:59.421 [2024-11-06 13:39:40.305883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa87ee0 (9): Bad file descriptor 00:09:59.421 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:59.421 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.421 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:59.421 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65127 00:09:59.421 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65127 00:09:59.989 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65127) - No such process 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65127 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 65127 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 65127 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.989 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.989 13:39:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 [2024-11-06 13:39:40.837630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65167 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:09:59.990 13:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:00.248 [2024-11-06 13:39:41.043855] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
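The second round builds a fresh subsystem and runs a 3-second perf load (pid 65167) against it; the alternating `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` lines that follow are the script polling for perf to exit, with a bail-out counter so a wedged run fails the test instead of hanging it. Paraphrased from the delete_subsystem.sh line numbers visible in the trace; the exact loop shape and the failure path are assumptions, not a verbatim copy:

    delay=0
    while kill -0 "$perf_pid"; do    # perf still running?
        sleep 0.5
        (( delay++ > 20 )) && exit 1 # assumed failure path; roughly a 10 s budget
    done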
00:10:00.508 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:00.508 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:00.508 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.076 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.076 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:01.076 13:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:01.645 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:01.645 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:01.645 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.214 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:02.214 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:02.214 13:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:02.473 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:02.473 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:02.473 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.041 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.041 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:03.041 13:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:03.299 Initializing NVMe Controllers 00:10:03.299 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:03.299 Controller IO queue size 128, less than required. 00:10:03.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:03.299 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:03.299 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:03.299 Initialization complete. Launching workers. 
00:10:03.299 ======================================================== 00:10:03.299 Latency(us) 00:10:03.299 Device Information : IOPS MiB/s Average min max 00:10:03.299 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003649.73 1000180.56 1015822.76 00:10:03.299 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005477.73 1000205.99 1042321.70 00:10:03.299 ======================================================== 00:10:03.299 Total : 256.00 0.12 1004563.73 1000180.56 1042321.70 00:10:03.299 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65167 00:10:03.560 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65167) - No such process 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65167 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.560 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.560 rmmod nvme_tcp 00:10:03.560 rmmod nvme_fabrics 00:10:03.560 rmmod nvme_keyring 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65076 ']' 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65076 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 65076 ']' 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 65076 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65076 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.821 killing process with pid 65076 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65076' 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 65076 00:10:03.821 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 65076 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:04.080 13:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:04.080 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:10:04.339 00:10:04.339 real 0m10.002s 00:10:04.339 user 0m28.997s 00:10:04.339 sys 0m2.037s 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.339 ************************************ 00:10:04.339 END TEST nvmf_delete_subsystem 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.339 ************************************ 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.339 ************************************ 00:10:04.339 START TEST nvmf_host_management 00:10:04.339 ************************************ 00:10:04.339 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:04.599 * Looking for test storage... 00:10:04.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:04.599 
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.599 --rc genhtml_branch_coverage=1 00:10:04.599 --rc genhtml_function_coverage=1 00:10:04.599 --rc genhtml_legend=1 00:10:04.599 --rc geninfo_all_blocks=1 00:10:04.599 --rc geninfo_unexecuted_blocks=1 00:10:04.599 00:10:04.599 ' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.599 --rc genhtml_branch_coverage=1 00:10:04.599 --rc genhtml_function_coverage=1 00:10:04.599 --rc genhtml_legend=1 00:10:04.599 --rc geninfo_all_blocks=1 00:10:04.599 --rc geninfo_unexecuted_blocks=1 00:10:04.599 00:10:04.599 ' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.599 --rc genhtml_branch_coverage=1 00:10:04.599 --rc genhtml_function_coverage=1 00:10:04.599 --rc genhtml_legend=1 00:10:04.599 --rc geninfo_all_blocks=1 00:10:04.599 --rc geninfo_unexecuted_blocks=1 00:10:04.599 00:10:04.599 ' 00:10:04.599 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.600 --rc genhtml_branch_coverage=1 00:10:04.600 --rc 
genhtml_function_coverage=1 00:10:04.600 --rc genhtml_legend=1 00:10:04.600 --rc geninfo_all_blocks=1 00:10:04.600 --rc geninfo_unexecuted_blocks=1 00:10:04.600 00:10:04.600 ' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:04.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:04.600 13:39:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:04.600 Cannot find device "nvmf_init_br" 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:04.600 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:04.860 Cannot find device "nvmf_init_br2" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:04.860 Cannot find device "nvmf_tgt_br" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.860 Cannot find device "nvmf_tgt_br2" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:04.860 Cannot find device "nvmf_init_br" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:04.860 Cannot find device "nvmf_init_br2" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:04.860 Cannot find device "nvmf_tgt_br" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:04.860 Cannot find device "nvmf_tgt_br2" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:04.860 Cannot find device "nvmf_br" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:04.860 13:39:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:04.860 Cannot find device "nvmf_init_if" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:04.860 Cannot find device "nvmf_init_if2" 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.860 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:05.120 13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
13:39:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:10:05.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:05.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms
00:10:05.120
00:10:05.120 --- 10.0.0.3 ping statistics ---
00:10:05.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.120 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:10:05.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:05.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms
00:10:05.120
00:10:05.120 --- 10.0.0.4 ping statistics ---
00:10:05.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.120 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:05.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:05.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms
00:10:05.120
00:10:05.120 --- 10.0.0.1 ping statistics ---
00:10:05.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.120 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:10:05.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:05.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms
00:10:05.120
00:10:05.120 --- 10.0.0.2 ping statistics ---
00:10:05.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:05.120 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65460
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65460
13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- #
'[' -z 65460 ']' 00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.380 13:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.380 [2024-11-06 13:39:46.138816] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:10:05.380 [2024-11-06 13:39:46.138920] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.380 [2024-11-06 13:39:46.278073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.639 [2024-11-06 13:39:46.355874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.639 [2024-11-06 13:39:46.355935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.639 [2024-11-06 13:39:46.355946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.639 [2024-11-06 13:39:46.355954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.639 [2024-11-06 13:39:46.355962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
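While waitforlisten above polls for nvmf_tgt to come up on /var/tmp/spdk.sock, it helps to picture the network the target was just started on. The ip/iptables trace earlier in this section (nvmf/common.sh@177 through @219) builds an isolated veth-and-bridge topology inside a network namespace. What follows is a minimal hand-written sketch of that shape, condensed to a single initiator/target pair (the suite itself creates two pairs plus the *_if2 variants; this is an illustration, not the suite's helper):

# Sketch of the nvmf_veth_init topology traced above, reduced to one veth pair per side.
ip netns add nvmf_tgt_ns_spdk                                # target runs isolated in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address (host side)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address (in netns)
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge stitches the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Admit NVMe/TCP traffic on port 4420; the SPDK_NVMF comment tags the rule for teardown.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The comment tag is what makes cleanup safe: the iptr helper seen at the top of this section restores the firewall with iptables-save | grep -v SPDK_NVMF | iptables-restore, removing exactly the rules the test inserted and nothing else.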
00:10:05.639 [2024-11-06 13:39:46.357259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.639 [2024-11-06 13:39:46.357347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.639 [2024-11-06 13:39:46.357512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.639 [2024-11-06 13:39:46.357513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.207 [2024-11-06 13:39:47.127538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.207 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.466 Malloc0 00:10:06.466 [2024-11-06 13:39:47.216916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65538 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65538 /var/tmp/bdevperf.sock 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 65538 ']' 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.466 { 00:10:06.466 "params": { 00:10:06.466 "name": "Nvme$subsystem", 00:10:06.466 "trtype": "$TEST_TRANSPORT", 00:10:06.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.466 "adrfam": "ipv4", 00:10:06.466 "trsvcid": "$NVMF_PORT", 00:10:06.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.466 "hdgst": ${hdgst:-false}, 00:10:06.466 "ddgst": ${ddgst:-false} 00:10:06.466 }, 00:10:06.466 "method": "bdev_nvme_attach_controller" 00:10:06.466 } 00:10:06.466 EOF 00:10:06.466 )") 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.466 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.467 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:06.467 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:06.467 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:06.467 13:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.467 "params": { 00:10:06.467 "name": "Nvme0", 00:10:06.467 "trtype": "tcp", 00:10:06.467 "traddr": "10.0.0.3", 00:10:06.467 "adrfam": "ipv4", 00:10:06.467 "trsvcid": "4420", 00:10:06.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:06.467 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:06.467 "hdgst": false, 00:10:06.467 "ddgst": false 00:10:06.467 }, 00:10:06.467 "method": "bdev_nvme_attach_controller" 00:10:06.467 }' 00:10:06.467 [2024-11-06 13:39:47.350110] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
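The '{ "params": ... }' document printed just above is how the test feeds bdevperf its target description: gen_nvmf_target_json expands a here-document once per subsystem (the $subsystem and ${hdgst:-false} placeholders are visible in the trace), jq normalizes the result, and the JSON reaches bdevperf through process substitution, which is why the command line shows --json /dev/fd/63. A hypothetical re-creation of the pattern, reduced to one controller (the helper name is invented, not the suite's implementation):

# Hypothetical single-controller version of the heredoc-to-JSON pattern above.
gen_attach_json() {
    local n=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme${n}",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${n}",
    "hostnqn": "nqn.2016-06.io.spdk:host${n}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# <(...) materializes as /dev/fd/63 inside the child process, matching the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_attach_json 0) -q 64 -o 65536 -w verify -t 10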
00:10:06.467 [2024-11-06 13:39:47.350195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65538 ] 00:10:06.725 [2024-11-06 13:39:47.503591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.725 [2024-11-06 13:39:47.583477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.983 Running I/O for 10 seconds... 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=995 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 995 -ge 100 ']' 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
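The '# break' that closes the trace above ends waitforio: instead of trusting a fixed sleep, host_management.sh polls bdevperf over its RPC socket until the NVMe bdev reports at least 100 completed reads (read_io_count=995 on the first probe here), proving I/O is actually flowing before the host gets yanked. The same polling pattern as a standalone sketch, with rpc.py standing in for the suite's rpc_cmd wrapper (the retry count and threshold are the ones the trace shows):

# Sketch of the waitforio loop traced above (rpc.py used in place of rpc_cmd).
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
for ((i = 10; i != 0; i--)); do
    read_io_count=$($rpc bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [[ $read_io_count -ge 100 ]] && break    # enough reads observed; proceed with the test
    sleep 1
done
(( i != 0 ))    # loop exhausted without I/O => nonzero status fails the test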
00:10:07.553 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 13:39:48.392042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f260d0 is same with the state(6) to be set
[... the tcp.c:1773 nvmf_tcp_qpair_set_recv_state record above repeats 30 more times, timestamps 13:39:48.392110 through 13:39:48.392350, all for tqpair=0x1f260d0 ...]
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 13:39:48.397698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:39:48.397742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the WRITE command / ABORTED - SQ DELETION completion pair above repeats for WRITE cid:1 through cid:41, lba:8320 through lba:13440 in 128-block steps, timestamps 13:39:48.397766 through 13:39:48.398573 ...]
[2024-11-06 13:39:48.398583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:39:48.398592] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.398979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.398990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.399017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.399029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:07.555 [2024-11-06 13:39:48.399038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.555 [2024-11-06 13:39:48.399077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:10:07.555 [2024-11-06 13:39:48.399255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.555 [2024-11-06 13:39:48.399268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.556 [2024-11-06 13:39:48.399280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.556 [2024-11-06 13:39:48.399290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.556 [2024-11-06 13:39:48.399301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.556 [2024-11-06 13:39:48.399310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.556 [2024-11-06 13:39:48.399320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:07.556 [2024-11-06 13:39:48.399329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.556 [2024-11-06 13:39:48.399339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52660 is same with the state(6) to be set 00:10:07.556 [2024-11-06 13:39:48.400333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:07.556 task offset: 8192 on job bdev=Nvme0n1 fails 00:10:07.556 00:10:07.556 Latency(us) 00:10:07.556 [2024-11-06T13:39:48.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.556 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:07.556 Job: Nvme0n1 ended in about 0.60 seconds with error 00:10:07.556 Verification LBA range: start 0x0 length 0x400 00:10:07.556 Nvme0n1 : 0.60 1814.97 113.44 106.76 0.00 32585.51 2329.29 28425.25 00:10:07.556 [2024-11-06T13:39:48.489Z] =================================================================================================================== 00:10:07.556 [2024-11-06T13:39:48.489Z] Total : 1814.97 113.44 106.76 0.00 32585.51 2329.29 28425.25 00:10:07.556 [2024-11-06 13:39:48.402803] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:07.556 [2024-11-06 13:39:48.402834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52660 (9): Bad file descriptor 00:10:07.556 [2024-11-06 13:39:48.407128] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller 
successful. 00:10:07.556 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.556 13:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65538 00:10:08.512 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65538) - No such process 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.512 { 00:10:08.512 "params": { 00:10:08.512 "name": "Nvme$subsystem", 00:10:08.512 "trtype": "$TEST_TRANSPORT", 00:10:08.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.512 "adrfam": "ipv4", 00:10:08.512 "trsvcid": "$NVMF_PORT", 00:10:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.512 "hdgst": ${hdgst:-false}, 00:10:08.512 "ddgst": ${ddgst:-false} 00:10:08.512 }, 00:10:08.512 "method": "bdev_nvme_attach_controller" 00:10:08.512 } 00:10:08.512 EOF 00:10:08.512 )") 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:08.512 13:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.512 "params": { 00:10:08.512 "name": "Nvme0", 00:10:08.512 "trtype": "tcp", 00:10:08.512 "traddr": "10.0.0.3", 00:10:08.512 "adrfam": "ipv4", 00:10:08.512 "trsvcid": "4420", 00:10:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:08.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:08.512 "hdgst": false, 00:10:08.512 "ddgst": false 00:10:08.512 }, 00:10:08.512 "method": "bdev_nvme_attach_controller" 00:10:08.512 }' 00:10:08.771 [2024-11-06 13:39:49.475189] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
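Aside: the heredoc traced above is how gen_nvmf_target_json hands bdevperf its target. The params block is printed, wrapped into a bdev config, and delivered as --json /dev/fd/62, a file descriptor created by bash process substitution. Below is a standalone sketch of the equivalent invocation; the outer "subsystems"/"bdev" wrapper is an assumption (only the inner params object is visible in this trace), while the binary path, flags, address, and NQNs are copied from the run. As a sanity check on the failed run's table further up, 1814.97 IOPS × 64 KiB per IO = 113.44 MiB/s, matching the MiB/s column.

# Sketch only: the subsystems/bdev wrapper is assumed, not shown in the trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 \
  --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)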
00:10:08.771 [2024-11-06 13:39:49.475281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65588 ] 00:10:08.771 [2024-11-06 13:39:49.649217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.030 [2024-11-06 13:39:49.752525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.289 Running I/O for 1 seconds... 00:10:10.227 1856.00 IOPS, 116.00 MiB/s 00:10:10.227 Latency(us) 00:10:10.227 [2024-11-06T13:39:51.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.227 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:10.227 Verification LBA range: start 0x0 length 0x400 00:10:10.227 Nvme0n1 : 1.02 1885.30 117.83 0.00 0.00 33387.25 5369.21 39584.80 00:10:10.227 [2024-11-06T13:39:51.160Z] =================================================================================================================== 00:10:10.227 [2024-11-06T13:39:51.160Z] Total : 1885.30 117.83 0.00 0.00 33387.25 5369.21 39584.80 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.487 rmmod nvme_tcp 00:10:10.487 rmmod nvme_fabrics 00:10:10.487 rmmod nvme_keyring 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65460 ']' 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65460 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 65460 ']' 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 65460 00:10:10.487 13:39:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.487 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65460 00:10:10.747 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:10.747 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:10.747 killing process with pid 65460 00:10:10.747 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65460' 00:10:10.747 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 65460 00:10:10.747 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 65460 00:10:11.007 [2024-11-06 13:39:51.717666] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.007 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.007 13:39:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.267 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.267 13:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:11.267 00:10:11.267 real 0m6.836s 00:10:11.267 user 0m24.182s 00:10:11.267 sys 0m2.116s 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.267 ************************************ 00:10:11.267 END TEST nvmf_host_management 00:10:11.267 ************************************ 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.267 ************************************ 00:10:11.267 START TEST nvmf_lvol 00:10:11.267 ************************************ 00:10:11.267 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:11.527 * Looking for test storage... 
00:10:11.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:11.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.528 --rc genhtml_branch_coverage=1 00:10:11.528 --rc genhtml_function_coverage=1 00:10:11.528 --rc genhtml_legend=1 00:10:11.528 --rc geninfo_all_blocks=1 00:10:11.528 --rc geninfo_unexecuted_blocks=1 00:10:11.528 00:10:11.528 ' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:11.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.528 --rc genhtml_branch_coverage=1 00:10:11.528 --rc genhtml_function_coverage=1 00:10:11.528 --rc genhtml_legend=1 00:10:11.528 --rc geninfo_all_blocks=1 00:10:11.528 --rc geninfo_unexecuted_blocks=1 00:10:11.528 00:10:11.528 ' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:11.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.528 --rc genhtml_branch_coverage=1 00:10:11.528 --rc genhtml_function_coverage=1 00:10:11.528 --rc genhtml_legend=1 00:10:11.528 --rc geninfo_all_blocks=1 00:10:11.528 --rc geninfo_unexecuted_blocks=1 00:10:11.528 00:10:11.528 ' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:11.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.528 --rc genhtml_branch_coverage=1 00:10:11.528 --rc genhtml_function_coverage=1 00:10:11.528 --rc genhtml_legend=1 00:10:11.528 --rc geninfo_all_blocks=1 00:10:11.528 --rc geninfo_unexecuted_blocks=1 00:10:11.528 00:10:11.528 ' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.528 13:39:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:11.528 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:11.529 
13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.529 Cannot find device "nvmf_init_br" 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.529 Cannot find device "nvmf_init_br2" 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:11.529 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.789 Cannot find device "nvmf_tgt_br" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.789 Cannot find device "nvmf_tgt_br2" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.789 Cannot find device "nvmf_init_br" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.789 Cannot find device "nvmf_init_br2" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.789 Cannot find device "nvmf_tgt_br" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.789 Cannot find device "nvmf_tgt_br2" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.789 Cannot find device "nvmf_br" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.789 Cannot find device "nvmf_init_if" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.789 Cannot find device "nvmf_init_if2" 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.789 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:10:12.050 00:10:12.050 --- 10.0.0.3 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.050 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.050 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:10:12.050 00:10:12.050 --- 10.0.0.4 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:12.050 00:10:12.050 --- 10.0.0.1 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:10:12.050 00:10:12.050 --- 10.0.0.2 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65858 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65858 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 65858 ']' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.050 13:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.310 [2024-11-06 13:39:52.991053] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
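Aside: the four pings above confirm the virtual topology that nvmf_veth_init assembled, and the target is now starting inside it. A condensed sketch of the same setup by hand follows, with the second initiator/target interface pair and the iptables comment tags omitted; every name, address, and flag is taken from the trace. The core mask -m 0x7 is binary 111, i.e. cores 0-2, which is why three reactors come up in the lines that follow.

# Recreate the traced topology: the initiator veth end stays in the root
# namespace, the target end moves into nvmf_tgt_ns_spdk, and the bridge
# halves of both pairs meet on nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # should answer once both ends are up
# then start the target on cores 0-2 (0x7) inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7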
00:10:12.310 [2024-11-06 13:39:52.991134] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.310 [2024-11-06 13:39:53.134010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.310 [2024-11-06 13:39:53.211639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.310 [2024-11-06 13:39:53.211695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.310 [2024-11-06 13:39:53.211706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.310 [2024-11-06 13:39:53.211714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.310 [2024-11-06 13:39:53.211722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.310 [2024-11-06 13:39:53.213037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.310 [2024-11-06 13:39:53.213176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.310 [2024-11-06 13:39:53.213179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.274 13:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.274 [2024-11-06 13:39:54.190682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.535 13:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.795 13:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:13.795 13:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.053 13:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:14.053 13:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:14.312 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:14.571 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b2acb46a-92ec-47fa-be01-63561f473689 00:10:14.571 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
b2acb46a-92ec-47fa-be01-63561f473689 lvol 20 00:10:14.830 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5b29fbc7-6fa8-4b06-ac7a-dab781426c97 00:10:14.830 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:15.089 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b29fbc7-6fa8-4b06-ac7a-dab781426c97 00:10:15.090 13:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:15.349 [2024-11-06 13:39:56.171046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:15.349 13:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:15.609 13:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:15.609 13:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66006 00:10:15.609 13:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:16.547 13:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5b29fbc7-6fa8-4b06-ac7a-dab781426c97 MY_SNAPSHOT 00:10:17.116 13:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=491c136a-e0d9-4e97-bc50-07b3e0903d9c 00:10:17.116 13:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5b29fbc7-6fa8-4b06-ac7a-dab781426c97 30 00:10:17.116 13:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 491c136a-e0d9-4e97-bc50-07b3e0903d9c MY_CLONE 00:10:17.686 13:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5ee65224-b9e5-4567-ae81-c81cdf49510f 00:10:17.686 13:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5ee65224-b9e5-4567-ae81-c81cdf49510f 00:10:18.256 13:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66006 00:10:26.398 Initializing NVMe Controllers 00:10:26.398 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:26.398 Controller IO queue size 128, less than required. 00:10:26.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:26.398 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:26.398 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:26.398 Initialization complete. Launching workers. 
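The volume spdk_nvme_perf is writing to here is a logical volume carved out of a RAID-0 of two malloc bdevs and exported over NVMe/TCP; while the workload runs, the script snapshots, resizes, clones and inflates that volume, and the latency summary that follows shows the write load surviving the churn. The rpc.py sequence traced above, condensed into a sketch (UUID outputs captured into shell variables; sizes are in MiB):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                     # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                     # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # While spdk_nvme_perf drives randwrite over TCP, exercise lvol maintenance:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                   # grow the live volume to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                    # decouple the clone from its snapshot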
00:10:26.398 ========================================================
00:10:26.398 Latency(us)
00:10:26.398 Device Information : IOPS MiB/s Average min max
00:10:26.398 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11833.50 46.22 10822.93 1881.54 66493.73
00:10:26.398 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12061.70 47.12 10615.96 1476.54 69950.93
00:10:26.398 ========================================================
00:10:26.398 Total : 23895.20 93.34 10718.46 1476.54 69950.93
00:10:26.398
00:10:26.398 13:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:26.398 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5b29fbc7-6fa8-4b06-ac7a-dab781426c97
00:10:26.398 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2acb46a-92ec-47fa-be01-63561f473689
00:10:26.657 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:10:26.657 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:10:26.657 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:10:26.657 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:26.657 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:10:26.916 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:26.916 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:10:26.916 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:26.916 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:26.916 rmmod nvme_tcp
00:10:26.917 rmmod nvme_fabrics
00:10:26.917 rmmod nvme_keyring
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65858 ']'
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65858
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 65858 ']'
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 65858
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65858
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 65858
13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 65858' 00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 65858 00:10:26.917 13:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 65858 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:27.176 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.436 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:27.695 00:10:27.695 real 0m16.304s 00:10:27.695 user 1m3.823s 00:10:27.695 sys 0m5.976s 00:10:27.695 ************************************ 00:10:27.695 END TEST nvmf_lvol 00:10:27.695 
************************************ 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.695 ************************************ 00:10:27.695 START TEST nvmf_lvs_grow 00:10:27.695 ************************************ 00:10:27.695 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:27.955 * Looking for test storage... 00:10:27.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.955 --rc genhtml_branch_coverage=1 00:10:27.955 --rc genhtml_function_coverage=1 00:10:27.955 --rc genhtml_legend=1 00:10:27.955 --rc geninfo_all_blocks=1 00:10:27.955 --rc geninfo_unexecuted_blocks=1 00:10:27.955 00:10:27.955 ' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.955 --rc genhtml_branch_coverage=1 00:10:27.955 --rc genhtml_function_coverage=1 00:10:27.955 --rc genhtml_legend=1 00:10:27.955 --rc geninfo_all_blocks=1 00:10:27.955 --rc geninfo_unexecuted_blocks=1 00:10:27.955 00:10:27.955 ' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.955 --rc genhtml_branch_coverage=1 00:10:27.955 --rc genhtml_function_coverage=1 00:10:27.955 --rc genhtml_legend=1 00:10:27.955 --rc geninfo_all_blocks=1 00:10:27.955 --rc geninfo_unexecuted_blocks=1 00:10:27.955 00:10:27.955 ' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.955 --rc genhtml_branch_coverage=1 00:10:27.955 --rc genhtml_function_coverage=1 00:10:27.955 --rc genhtml_legend=1 00:10:27.955 --rc geninfo_all_blocks=1 00:10:27.955 --rc geninfo_unexecuted_blocks=1 00:10:27.955 00:10:27.955 ' 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:27.955 13:40:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:10:27.955 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
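Two details of the helper tracing above are worth unpacking. First, the lt/cmp_versions calls used for lcov detection implement a field-wise version comparison: the version strings are split on '.', '-' and ':' and the fields compared numerically. A simplified sketch of the idea (not a verbatim copy of scripts/common.sh):

  # lt 1.15 2  expands to  cmp_versions 1.15 '<' 2, as in the trace above.
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          if (( ${ver1[v]:-0} != ${ver2[v]:-0} )); then
              # The first differing field decides; missing fields count as 0.
              case $op in
                  '<' | '<=') (( ${ver1[v]:-0} < ${ver2[v]:-0} )); return ;;
                  '>' | '>=') (( ${ver1[v]:-0} > ${ver2[v]:-0} )); return ;;
                  *) return 1 ;;
              esac
          fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
  }

Second, the 'line 33: [: : integer expression expected' message above is the classic failure mode of a numeric test on an empty variable: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the guarded variable is unset in this configuration, the test fails, and the guarded branch is simply skipped. A defensive variant defaults the expansion (illustrative only, not the script's actual fix):

  var=                                           # empty in this configuration
  [ "$var" -eq 1 ] && echo guarded-branch        # noisy: "[: : integer expression expected"
  [ "${var:-0}" -eq 1 ] && echo guarded-branch   # quiet: empty defaults to 0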
00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
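Before building the topology, nvmf_veth_init first tears down whatever a previous run may have left behind; every cleanup step is allowed to fail, which is what the 'Cannot find device' complaints and the interleaved true entries below are. The pattern, condensed (the explicit '2>/dev/null || true' stands in for however the real helper suppresses the failures; the command list itself mirrors the trace):

  # Remove leftovers from a previous run; each step may fail harmlessly
  # when the device does not exist yet on a clean host.
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster 2>/dev/null || true
      ip link set "$br" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true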
00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:27.956 Cannot find device "nvmf_init_br" 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:27.956 Cannot find device "nvmf_init_br2" 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:27.956 Cannot find device "nvmf_tgt_br" 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:27.956 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.215 Cannot find device "nvmf_tgt_br2" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:28.215 Cannot find device "nvmf_init_br" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:28.215 Cannot find device "nvmf_init_br2" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:28.215 Cannot find device "nvmf_tgt_br" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:28.215 Cannot find device "nvmf_tgt_br2" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:28.215 Cannot find device "nvmf_br" 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:28.215 13:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:28.215 Cannot find device "nvmf_init_if" 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:28.215 Cannot find device "nvmf_init_if2" 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:28.215 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
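With the stale state gone, the helper builds the test network: veth pairs whose host-side ends are enslaved to the nvmf_br bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. Condensed from the commands traced above, showing the first initiator/target pair (the *_if2/*_br2 pair is identical with 10.0.0.2 and 10.0.0.4):

  # Host side: initiator veth plus bridge ends; namespace side: target veth.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator addresses live on the host, target addresses inside the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring everything up and join the bridge ends into nvmf_br.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The pings that follow verify both directions across the bridge: host to the in-namespace targets, and namespace back to the host-side initiators.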
00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:28.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:10:28.474 00:10:28.474 --- 10.0.0.3 ping statistics --- 00:10:28.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.474 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:28.474 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:28.474 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:28.474 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:10:28.474 00:10:28.474 --- 10.0.0.4 ping statistics --- 00:10:28.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.474 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:28.475 00:10:28.475 --- 10.0.0.1 ping statistics --- 00:10:28.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.475 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:28.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:28.475 00:10:28.475 --- 10.0.0.2 ping statistics --- 00:10:28.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.475 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66441 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66441 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 66441 ']' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.475 13:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:28.733 [2024-11-06 13:40:09.445674] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
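nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers (max_retries=100 in this trace). A rough equivalent of the wait loop, assuming rpc.py's rpc_get_methods as the liveness probe (the real helper in autotest_common.sh performs additional checks):

  # Poll the SPDK RPC socket until the target answers or retries run out.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 100; i != 0; i--)); do
          # Fail fast if the target died during startup.
          kill -0 "$pid" 2>/dev/null || return 1
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                  rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }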
00:10:28.733 [2024-11-06 13:40:09.445771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.733 [2024-11-06 13:40:09.601206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.991 [2024-11-06 13:40:09.679840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.991 [2024-11-06 13:40:09.680108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.991 [2024-11-06 13:40:09.680127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.991 [2024-11-06 13:40:09.680136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.991 [2024-11-06 13:40:09.680145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.991 [2024-11-06 13:40:09.680521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.558 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.818 [2024-11-06 13:40:10.671348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:29.818 ************************************ 00:10:29.818 START TEST lvs_grow_clean 00:10:29.818 ************************************ 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:29.818 13:40:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:29.818 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:30.077 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:30.077 13:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:30.337 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:30.337 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:30.337 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 lvol 150 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=45d210dd-32a7-4daf-b23b-1fe70772ff5e 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.905 13:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:31.164 [2024-11-06 13:40:12.027917] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:31.164 [2024-11-06 13:40:12.028000] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:31.164 true 00:10:31.164 13:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:31.164 13:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:31.422 13:40:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:31.422 13:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:31.681 13:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45d210dd-32a7-4daf-b23b-1fe70772ff5e 00:10:31.941 13:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:32.200 [2024-11-06 13:40:12.987006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:32.200 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66604 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66604 /var/tmp/bdevperf.sock 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 66604 ']' 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.460 13:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:32.460 [2024-11-06 13:40:13.311100] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
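This is the clean-grow scenario under test: a 200 MiB AIO file yields an lvstore with 49 usable 4 MiB clusters (the remainder goes to metadata, sized by --md-pages-per-cluster-ratio), and growing the backing file alone must not change that count until bdev_lvol_grow_lvstore is invoked; the trace further on confirms data_clusters going from 49 to 99. The traced commands, condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
  truncate -s 400M "$aio"                            # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                      # ...and let the bdev notice
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49
  $rpc bdev_lvol_grow_lvstore -u "$lvs"              # adopt the new capacity
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # now 99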
00:10:32.460 [2024-11-06 13:40:13.311229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66604 ] 00:10:32.719 [2024-11-06 13:40:13.470708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.719 [2024-11-06 13:40:13.550045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.658 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.658 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:33.658 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:33.658 Nvme0n1 00:10:33.918 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:33.918 [ 00:10:33.918 { 00:10:33.918 "aliases": [ 00:10:33.918 "45d210dd-32a7-4daf-b23b-1fe70772ff5e" 00:10:33.918 ], 00:10:33.918 "assigned_rate_limits": { 00:10:33.918 "r_mbytes_per_sec": 0, 00:10:33.918 "rw_ios_per_sec": 0, 00:10:33.918 "rw_mbytes_per_sec": 0, 00:10:33.918 "w_mbytes_per_sec": 0 00:10:33.918 }, 00:10:33.918 "block_size": 4096, 00:10:33.918 "claimed": false, 00:10:33.918 "driver_specific": { 00:10:33.918 "mp_policy": "active_passive", 00:10:33.918 "nvme": [ 00:10:33.918 { 00:10:33.918 "ctrlr_data": { 00:10:33.918 "ana_reporting": false, 00:10:33.918 "cntlid": 1, 00:10:33.918 "firmware_revision": "25.01", 00:10:33.918 "model_number": "SPDK bdev Controller", 00:10:33.918 "multi_ctrlr": true, 00:10:33.918 "oacs": { 00:10:33.918 "firmware": 0, 00:10:33.918 "format": 0, 00:10:33.918 "ns_manage": 0, 00:10:33.918 "security": 0 00:10:33.918 }, 00:10:33.918 "serial_number": "SPDK0", 00:10:33.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:33.918 "vendor_id": "0x8086" 00:10:33.918 }, 00:10:33.918 "ns_data": { 00:10:33.918 "can_share": true, 00:10:33.918 "id": 1 00:10:33.918 }, 00:10:33.918 "trid": { 00:10:33.918 "adrfam": "IPv4", 00:10:33.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:33.918 "traddr": "10.0.0.3", 00:10:33.918 "trsvcid": "4420", 00:10:33.918 "trtype": "TCP" 00:10:33.918 }, 00:10:33.918 "vs": { 00:10:33.918 "nvme_version": "1.3" 00:10:33.918 } 00:10:33.918 } 00:10:33.918 ] 00:10:33.918 }, 00:10:33.918 "memory_domains": [ 00:10:33.918 { 00:10:33.918 "dma_device_id": "system", 00:10:33.918 "dma_device_type": 1 00:10:33.918 } 00:10:33.918 ], 00:10:33.918 "name": "Nvme0n1", 00:10:33.918 "num_blocks": 38912, 00:10:33.918 "numa_id": -1, 00:10:33.918 "product_name": "NVMe disk", 00:10:33.918 "supported_io_types": { 00:10:33.918 "abort": true, 00:10:33.919 "compare": true, 00:10:33.919 "compare_and_write": true, 00:10:33.919 "copy": true, 00:10:33.919 "flush": true, 00:10:33.919 "get_zone_info": false, 00:10:33.919 "nvme_admin": true, 00:10:33.919 "nvme_io": true, 00:10:33.919 "nvme_io_md": false, 00:10:33.919 "nvme_iov_md": false, 00:10:33.919 "read": true, 00:10:33.919 "reset": true, 00:10:33.919 "seek_data": false, 00:10:33.919 "seek_hole": false, 00:10:33.919 "unmap": true, 00:10:33.919 
"write": true, 00:10:33.919 "write_zeroes": true, 00:10:33.919 "zcopy": false, 00:10:33.919 "zone_append": false, 00:10:33.919 "zone_management": false 00:10:33.919 }, 00:10:33.919 "uuid": "45d210dd-32a7-4daf-b23b-1fe70772ff5e", 00:10:33.919 "zoned": false 00:10:33.919 } 00:10:33.919 ] 00:10:33.919 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66651 00:10:33.919 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:33.919 13:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:34.177 Running I/O for 10 seconds... 00:10:35.114 Latency(us) 00:10:35.114 [2024-11-06T13:40:16.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.114 Nvme0n1 : 1.00 9834.00 38.41 0.00 0.00 0.00 0.00 0.00 00:10:35.114 [2024-11-06T13:40:16.047Z] =================================================================================================================== 00:10:35.114 [2024-11-06T13:40:16.047Z] Total : 9834.00 38.41 0.00 0.00 0.00 0.00 0.00 00:10:35.114 00:10:36.051 13:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:36.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.051 Nvme0n1 : 2.00 9841.00 38.44 0.00 0.00 0.00 0.00 0.00 00:10:36.051 [2024-11-06T13:40:16.984Z] =================================================================================================================== 00:10:36.051 [2024-11-06T13:40:16.984Z] Total : 9841.00 38.44 0.00 0.00 0.00 0.00 0.00 00:10:36.051 00:10:36.325 true 00:10:36.325 13:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:36.325 13:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:36.583 13:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:36.583 13:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:36.583 13:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66651 00:10:37.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.165 Nvme0n1 : 3.00 9816.67 38.35 0.00 0.00 0.00 0.00 0.00 00:10:37.165 [2024-11-06T13:40:18.098Z] =================================================================================================================== 00:10:37.165 [2024-11-06T13:40:18.098Z] Total : 9816.67 38.35 0.00 0.00 0.00 0.00 0.00 00:10:37.165 00:10:38.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.099 Nvme0n1 : 4.00 9599.75 37.50 0.00 0.00 0.00 0.00 0.00 00:10:38.099 [2024-11-06T13:40:19.032Z] =================================================================================================================== 00:10:38.099 [2024-11-06T13:40:19.032Z] Total : 9599.75 37.50 0.00 0.00 0.00 
0.00 0.00 00:10:38.099 00:10:39.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.034 Nvme0n1 : 5.00 9556.60 37.33 0.00 0.00 0.00 0.00 0.00 00:10:39.034 [2024-11-06T13:40:19.967Z] =================================================================================================================== 00:10:39.034 [2024-11-06T13:40:19.967Z] Total : 9556.60 37.33 0.00 0.00 0.00 0.00 0.00 00:10:39.034 00:10:40.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.412 Nvme0n1 : 6.00 9526.50 37.21 0.00 0.00 0.00 0.00 0.00 00:10:40.412 [2024-11-06T13:40:21.345Z] =================================================================================================================== 00:10:40.412 [2024-11-06T13:40:21.345Z] Total : 9526.50 37.21 0.00 0.00 0.00 0.00 0.00 00:10:40.412 00:10:41.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.350 Nvme0n1 : 7.00 9510.43 37.15 0.00 0.00 0.00 0.00 0.00 00:10:41.350 [2024-11-06T13:40:22.283Z] =================================================================================================================== 00:10:41.350 [2024-11-06T13:40:22.283Z] Total : 9510.43 37.15 0.00 0.00 0.00 0.00 0.00 00:10:41.350 00:10:42.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.310 Nvme0n1 : 8.00 9450.88 36.92 0.00 0.00 0.00 0.00 0.00 00:10:42.310 [2024-11-06T13:40:23.243Z] =================================================================================================================== 00:10:42.310 [2024-11-06T13:40:23.243Z] Total : 9450.88 36.92 0.00 0.00 0.00 0.00 0.00 00:10:42.310 00:10:43.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.249 Nvme0n1 : 9.00 9394.33 36.70 0.00 0.00 0.00 0.00 0.00 00:10:43.249 [2024-11-06T13:40:24.182Z] =================================================================================================================== 00:10:43.249 [2024-11-06T13:40:24.182Z] Total : 9394.33 36.70 0.00 0.00 0.00 0.00 0.00 00:10:43.249 00:10:44.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.185 Nvme0n1 : 10.00 9351.30 36.53 0.00 0.00 0.00 0.00 0.00 00:10:44.185 [2024-11-06T13:40:25.119Z] =================================================================================================================== 00:10:44.186 [2024-11-06T13:40:25.119Z] Total : 9351.30 36.53 0.00 0.00 0.00 0.00 0.00 00:10:44.186 00:10:44.186 00:10:44.186 Latency(us) 00:10:44.186 [2024-11-06T13:40:25.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.186 Nvme0n1 : 10.00 9361.14 36.57 0.00 0.00 13669.88 4842.82 107384.29 00:10:44.186 [2024-11-06T13:40:25.119Z] =================================================================================================================== 00:10:44.186 [2024-11-06T13:40:25.119Z] Total : 9361.14 36.57 0.00 0.00 13669.88 4842.82 107384.29 00:10:44.186 { 00:10:44.186 "results": [ 00:10:44.186 { 00:10:44.186 "job": "Nvme0n1", 00:10:44.186 "core_mask": "0x2", 00:10:44.186 "workload": "randwrite", 00:10:44.186 "status": "finished", 00:10:44.186 "queue_depth": 128, 00:10:44.186 "io_size": 4096, 00:10:44.186 "runtime": 10.003163, 00:10:44.186 "iops": 9361.139071711617, 00:10:44.186 "mibps": 36.566949498873505, 00:10:44.186 "io_failed": 0, 00:10:44.186 "io_timeout": 0, 00:10:44.186 "avg_latency_us": 
13669.879434149278, 00:10:44.186 "min_latency_us": 4842.820883534137, 00:10:44.186 "max_latency_us": 107384.2891566265 00:10:44.186 } 00:10:44.186 ], 00:10:44.186 "core_count": 1 00:10:44.186 } 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66604 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 66604 ']' 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 66604 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.186 13:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66604 00:10:44.186 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:44.186 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:44.186 killing process with pid 66604 00:10:44.186 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66604' 00:10:44.186 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.186 00:10:44.186 Latency(us) 00:10:44.186 [2024-11-06T13:40:25.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.186 [2024-11-06T13:40:25.119Z] =================================================================================================================== 00:10:44.186 [2024-11-06T13:40:25.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.186 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 66604 00:10:44.186 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 66604 00:10:44.445 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:44.704 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.963 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:44.963 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:45.223 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.223 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:45.223 13:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:45.482 [2024-11-06 13:40:26.201645] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:45.482 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:45.742 2024/11/06 13:40:26 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d344fccf-478c-4994-8d1c-8ee8c6cd1720], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:45.742 request: 00:10:45.742 { 00:10:45.742 "method": "bdev_lvol_get_lvstores", 00:10:45.742 "params": { 00:10:45.742 "uuid": "d344fccf-478c-4994-8d1c-8ee8c6cd1720" 00:10:45.742 } 00:10:45.742 } 00:10:45.742 Got JSON-RPC error response 00:10:45.742 GoRPCClient: error on JSON-RPC call 00:10:45.742 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:45.742 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.742 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.742 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.742 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.002 aio_bdev 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 45d210dd-32a7-4daf-b23b-1fe70772ff5e 00:10:46.002 13:40:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=45d210dd-32a7-4daf-b23b-1fe70772ff5e 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:46.002 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:46.262 13:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45d210dd-32a7-4daf-b23b-1fe70772ff5e -t 2000 00:10:46.522 [ 00:10:46.522 { 00:10:46.522 "aliases": [ 00:10:46.522 "lvs/lvol" 00:10:46.522 ], 00:10:46.522 "assigned_rate_limits": { 00:10:46.522 "r_mbytes_per_sec": 0, 00:10:46.522 "rw_ios_per_sec": 0, 00:10:46.522 "rw_mbytes_per_sec": 0, 00:10:46.522 "w_mbytes_per_sec": 0 00:10:46.522 }, 00:10:46.522 "block_size": 4096, 00:10:46.522 "claimed": false, 00:10:46.522 "driver_specific": { 00:10:46.522 "lvol": { 00:10:46.522 "base_bdev": "aio_bdev", 00:10:46.522 "clone": false, 00:10:46.522 "esnap_clone": false, 00:10:46.522 "lvol_store_uuid": "d344fccf-478c-4994-8d1c-8ee8c6cd1720", 00:10:46.522 "num_allocated_clusters": 38, 00:10:46.522 "snapshot": false, 00:10:46.522 "thin_provision": false 00:10:46.522 } 00:10:46.522 }, 00:10:46.522 "name": "45d210dd-32a7-4daf-b23b-1fe70772ff5e", 00:10:46.522 "num_blocks": 38912, 00:10:46.522 "product_name": "Logical Volume", 00:10:46.522 "supported_io_types": { 00:10:46.522 "abort": false, 00:10:46.522 "compare": false, 00:10:46.522 "compare_and_write": false, 00:10:46.522 "copy": false, 00:10:46.522 "flush": false, 00:10:46.522 "get_zone_info": false, 00:10:46.522 "nvme_admin": false, 00:10:46.522 "nvme_io": false, 00:10:46.522 "nvme_io_md": false, 00:10:46.522 "nvme_iov_md": false, 00:10:46.522 "read": true, 00:10:46.522 "reset": true, 00:10:46.522 "seek_data": true, 00:10:46.522 "seek_hole": true, 00:10:46.522 "unmap": true, 00:10:46.522 "write": true, 00:10:46.522 "write_zeroes": true, 00:10:46.522 "zcopy": false, 00:10:46.522 "zone_append": false, 00:10:46.522 "zone_management": false 00:10:46.522 }, 00:10:46.522 "uuid": "45d210dd-32a7-4daf-b23b-1fe70772ff5e", 00:10:46.522 "zoned": false 00:10:46.522 } 00:10:46.522 ] 00:10:46.522 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:10:46.522 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:46.522 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:46.782 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:46.782 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:46.782 13:40:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:47.041 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:47.041 13:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 45d210dd-32a7-4daf-b23b-1fe70772ff5e 00:10:47.301 13:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d344fccf-478c-4994-8d1c-8ee8c6cd1720 00:10:47.562 13:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.821 13:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.391 ************************************ 00:10:48.391 END TEST lvs_grow_clean 00:10:48.391 ************************************ 00:10:48.391 00:10:48.391 real 0m18.357s 00:10:48.391 user 0m16.550s 00:10:48.391 sys 0m3.293s 00:10:48.391 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.392 ************************************ 00:10:48.392 START TEST lvs_grow_dirty 00:10:48.392 ************************************ 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
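For reference, the dirty-variant setup traced below condenses to the following shell sketch (paths are the ones used in this run; the lvstore UUID is generated per run — e6c10d8d-... in this log — and written as <lvs-uuid> here):

  # Back the lvstore with a 200 MiB file exposed as a 4 KiB-block AIO bdev.
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

  # 4 MiB clusters; the metadata-page ratio reserves extra md pages so the store can grow later.
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs

  # Metadata overhead leaves 49 usable clusters out of the raw 50 (200 MiB / 4 MiB).
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49

  # Carve a 150 MiB lvol, then double the backing file and rescan so the bdev sees the new size.
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev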
00:10:48.392 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.651 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:48.651 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:48.908 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6c10d8d-a55c-4e9e-a505-0df12299e406 00:10:48.908 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:10:48.908 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:49.167 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:49.167 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:49.167 13:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e6c10d8d-a55c-4e9e-a505-0df12299e406 lvol 150 00:10:49.426 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=18865ebf-4757-4bb6-bad1-0ce452081e52 00:10:49.426 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:49.426 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:49.685 [2024-11-06 13:40:30.484973] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:49.685 [2024-11-06 13:40:30.485061] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:49.685 true 00:10:49.685 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:10:49.685 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:49.944 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:49.944 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:50.203 13:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18865ebf-4757-4bb6-bad1-0ce452081e52 00:10:50.463 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:50.722 [2024-11-06 13:40:31.447998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:50.722 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67056 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67056 /var/tmp/bdevperf.sock 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67056 ']' 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.982 13:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:50.982 [2024-11-06 13:40:31.769801] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:10:50.982 [2024-11-06 13:40:31.769924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67056 ] 00:10:51.240 [2024-11-06 13:40:31.923256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.240 [2024-11-06 13:40:32.002335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.807 13:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:51.807 13:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:51.807 13:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:52.066 Nvme0n1 00:10:52.323 13:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:52.323 [ 00:10:52.323 { 00:10:52.323 "aliases": [ 00:10:52.323 "18865ebf-4757-4bb6-bad1-0ce452081e52" 00:10:52.323 ], 00:10:52.323 "assigned_rate_limits": { 00:10:52.323 "r_mbytes_per_sec": 0, 00:10:52.323 "rw_ios_per_sec": 0, 00:10:52.323 "rw_mbytes_per_sec": 0, 00:10:52.323 "w_mbytes_per_sec": 0 00:10:52.323 }, 00:10:52.323 "block_size": 4096, 00:10:52.323 "claimed": false, 00:10:52.323 "driver_specific": { 00:10:52.323 "mp_policy": "active_passive", 00:10:52.323 "nvme": [ 00:10:52.323 { 00:10:52.323 "ctrlr_data": { 00:10:52.323 "ana_reporting": false, 00:10:52.323 "cntlid": 1, 00:10:52.323 "firmware_revision": "25.01", 00:10:52.323 "model_number": "SPDK bdev Controller", 00:10:52.323 "multi_ctrlr": true, 00:10:52.323 "oacs": { 00:10:52.323 "firmware": 0, 00:10:52.323 "format": 0, 00:10:52.323 "ns_manage": 0, 00:10:52.323 "security": 0 00:10:52.323 }, 00:10:52.323 "serial_number": "SPDK0", 00:10:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:52.323 "vendor_id": "0x8086" 00:10:52.323 }, 00:10:52.323 "ns_data": { 00:10:52.323 "can_share": true, 00:10:52.323 "id": 1 00:10:52.323 }, 00:10:52.323 "trid": { 00:10:52.323 "adrfam": "IPv4", 00:10:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:52.323 "traddr": "10.0.0.3", 00:10:52.323 "trsvcid": "4420", 00:10:52.323 "trtype": "TCP" 00:10:52.323 }, 00:10:52.323 "vs": { 00:10:52.323 "nvme_version": "1.3" 00:10:52.323 } 00:10:52.323 } 00:10:52.323 ] 00:10:52.323 }, 00:10:52.323 "memory_domains": [ 00:10:52.323 { 00:10:52.323 "dma_device_id": "system", 00:10:52.323 "dma_device_type": 1 00:10:52.323 } 00:10:52.323 ], 00:10:52.323 "name": "Nvme0n1", 00:10:52.323 "num_blocks": 38912, 00:10:52.323 "numa_id": -1, 00:10:52.323 "product_name": "NVMe disk", 00:10:52.323 "supported_io_types": { 00:10:52.323 "abort": true, 00:10:52.323 "compare": true, 00:10:52.323 "compare_and_write": true, 00:10:52.323 "copy": true, 00:10:52.323 "flush": true, 00:10:52.323 "get_zone_info": false, 00:10:52.323 "nvme_admin": true, 00:10:52.323 "nvme_io": true, 00:10:52.323 "nvme_io_md": false, 00:10:52.323 "nvme_iov_md": false, 00:10:52.323 "read": true, 00:10:52.323 "reset": true, 00:10:52.323 "seek_data": false, 00:10:52.323 "seek_hole": false, 00:10:52.323 "unmap": true, 00:10:52.323 
"write": true, 00:10:52.323 "write_zeroes": true, 00:10:52.323 "zcopy": false, 00:10:52.324 "zone_append": false, 00:10:52.324 "zone_management": false 00:10:52.324 }, 00:10:52.324 "uuid": "18865ebf-4757-4bb6-bad1-0ce452081e52", 00:10:52.324 "zoned": false 00:10:52.324 } 00:10:52.324 ] 00:10:52.324 13:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67098 00:10:52.324 13:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:52.324 13:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:52.582 Running I/O for 10 seconds... 00:10:53.524 Latency(us) 00:10:53.524 [2024-11-06T13:40:34.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.524 Nvme0n1 : 1.00 10540.00 41.17 0.00 0.00 0.00 0.00 0.00 00:10:53.524 [2024-11-06T13:40:34.457Z] =================================================================================================================== 00:10:53.524 [2024-11-06T13:40:34.457Z] Total : 10540.00 41.17 0.00 0.00 0.00 0.00 0.00 00:10:53.524 00:10:54.459 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:10:54.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.459 Nvme0n1 : 2.00 10368.00 40.50 0.00 0.00 0.00 0.00 0.00 00:10:54.459 [2024-11-06T13:40:35.392Z] =================================================================================================================== 00:10:54.459 [2024-11-06T13:40:35.392Z] Total : 10368.00 40.50 0.00 0.00 0.00 0.00 0.00 00:10:54.459 00:10:54.718 true 00:10:54.718 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:54.718 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:10:54.980 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:54.980 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:54.980 13:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67098 00:10:55.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.583 Nvme0n1 : 3.00 10145.00 39.63 0.00 0.00 0.00 0.00 0.00 00:10:55.583 [2024-11-06T13:40:36.516Z] =================================================================================================================== 00:10:55.583 [2024-11-06T13:40:36.516Z] Total : 10145.00 39.63 0.00 0.00 0.00 0.00 0.00 00:10:55.583 00:10:56.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.518 Nvme0n1 : 4.00 10029.25 39.18 0.00 0.00 0.00 0.00 0.00 00:10:56.518 [2024-11-06T13:40:37.451Z] =================================================================================================================== 00:10:56.518 [2024-11-06T13:40:37.451Z] Total : 10029.25 39.18 0.00 
0.00 0.00 0.00 0.00 00:10:56.518 00:10:57.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.454 Nvme0n1 : 5.00 9810.40 38.32 0.00 0.00 0.00 0.00 0.00 00:10:57.454 [2024-11-06T13:40:38.387Z] =================================================================================================================== 00:10:57.454 [2024-11-06T13:40:38.387Z] Total : 9810.40 38.32 0.00 0.00 0.00 0.00 0.00 00:10:57.454 00:10:58.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.833 Nvme0n1 : 6.00 9185.67 35.88 0.00 0.00 0.00 0.00 0.00 00:10:58.833 [2024-11-06T13:40:39.766Z] =================================================================================================================== 00:10:58.833 [2024-11-06T13:40:39.766Z] Total : 9185.67 35.88 0.00 0.00 0.00 0.00 0.00 00:10:58.833 00:10:59.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.422 Nvme0n1 : 7.00 9188.57 35.89 0.00 0.00 0.00 0.00 0.00 00:10:59.422 [2024-11-06T13:40:40.355Z] =================================================================================================================== 00:10:59.422 [2024-11-06T13:40:40.355Z] Total : 9188.57 35.89 0.00 0.00 0.00 0.00 0.00 00:10:59.422 00:11:00.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.799 Nvme0n1 : 8.00 9242.00 36.10 0.00 0.00 0.00 0.00 0.00 00:11:00.799 [2024-11-06T13:40:41.732Z] =================================================================================================================== 00:11:00.799 [2024-11-06T13:40:41.732Z] Total : 9242.00 36.10 0.00 0.00 0.00 0.00 0.00 00:11:00.799 00:11:01.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.734 Nvme0n1 : 9.00 9200.00 35.94 0.00 0.00 0.00 0.00 0.00 00:11:01.734 [2024-11-06T13:40:42.667Z] =================================================================================================================== 00:11:01.734 [2024-11-06T13:40:42.667Z] Total : 9200.00 35.94 0.00 0.00 0.00 0.00 0.00 00:11:01.734 00:11:02.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.668 Nvme0n1 : 10.00 9251.10 36.14 0.00 0.00 0.00 0.00 0.00 00:11:02.668 [2024-11-06T13:40:43.601Z] =================================================================================================================== 00:11:02.668 [2024-11-06T13:40:43.601Z] Total : 9251.10 36.14 0.00 0.00 0.00 0.00 0.00 00:11:02.668 00:11:02.668 00:11:02.668 Latency(us) 00:11:02.668 [2024-11-06T13:40:43.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.668 Nvme0n1 : 10.01 9260.22 36.17 0.00 0.00 13815.42 3895.31 382372.29 00:11:02.668 [2024-11-06T13:40:43.601Z] =================================================================================================================== 00:11:02.668 [2024-11-06T13:40:43.601Z] Total : 9260.22 36.17 0.00 0.00 13815.42 3895.31 382372.29 00:11:02.668 { 00:11:02.668 "results": [ 00:11:02.668 { 00:11:02.668 "job": "Nvme0n1", 00:11:02.668 "core_mask": "0x2", 00:11:02.668 "workload": "randwrite", 00:11:02.668 "status": "finished", 00:11:02.668 "queue_depth": 128, 00:11:02.668 "io_size": 4096, 00:11:02.668 "runtime": 10.013479, 00:11:02.668 "iops": 9260.218151952982, 00:11:02.668 "mibps": 36.17272715606634, 00:11:02.668 "io_failed": 0, 00:11:02.668 "io_timeout": 0, 00:11:02.668 "avg_latency_us": 
13815.422654756767, 00:11:02.668 "min_latency_us": 3895.3124497991967, 00:11:02.668 "max_latency_us": 382372.2923694779 00:11:02.668 } 00:11:02.668 ], 00:11:02.668 "core_count": 1 00:11:02.668 } 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67056 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 67056 ']' 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 67056 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67056 00:11:02.668 killing process with pid 67056 00:11:02.668 Received shutdown signal, test time was about 10.000000 seconds 00:11:02.668 00:11:02.668 Latency(us) 00:11:02.668 [2024-11-06T13:40:43.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.668 [2024-11-06T13:40:43.601Z] =================================================================================================================== 00:11:02.668 [2024-11-06T13:40:43.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67056' 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 67056 00:11:02.668 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 67056 00:11:02.926 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:03.185 13:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:03.444 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:03.444 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:03.702 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:03.702 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66441 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66441 00:11:03.703 
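The free_clusters value checked just above is plain arithmetic: growing the store took total_data_clusters from 49 to 99, and the 150 MiB lvol pins 38 clusters (150 MiB / 4 MiB, rounded up), leaving 99 - 38 = 61 free. A standalone check along those lines, with <lvs-uuid> and <lvol-uuid> as placeholders for this run's generated IDs:

  total=$(scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters')  # 99
  free=$(scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters')         # 61
  alloc=$(scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000 |
          jq -r '.[0].driver_specific.lvol.num_allocated_clusters')                                # 38
  (( free == total - alloc ))   # 61 == 99 - 38

The kill -9 right after is deliberate: SIGKILL gives the target no chance to close the lvstore, so the blobstore is left dirty on disk for the recovery path exercised next.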
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66441 Killed "${NVMF_APP[@]}" "$@" 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67261 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67261 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67261 ']' 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.703 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:03.703 [2024-11-06 13:40:44.474740] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:03.703 [2024-11-06 13:40:44.474819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.703 [2024-11-06 13:40:44.613916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.963 [2024-11-06 13:40:44.667371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.963 [2024-11-06 13:40:44.667419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.963 [2024-11-06 13:40:44.667430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.963 [2024-11-06 13:40:44.667439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.963 [2024-11-06 13:40:44.667446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
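A fresh single-core nvmf_tgt (-m 0x1) now attaches to the same backing file; re-creating the AIO bdev is what triggers the blobstore recovery notices seen below, after which the lvstore should report its pre-kill counters. A sketch of that recovery probe, using the same placeholder UUID as above:

  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # expect notices: "Performing recovery on blobstore", "Recover: blob 0x0", "Recover: blob 0x1"
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'   # still 61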
00:11:03.963 [2024-11-06 13:40:44.667764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.963 13:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.223 [2024-11-06 13:40:45.043758] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:04.223 [2024-11-06 13:40:45.044115] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:04.223 [2024-11-06 13:40:45.044412] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 18865ebf-4757-4bb6-bad1-0ce452081e52 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=18865ebf-4757-4bb6-bad1-0ce452081e52 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:04.223 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.482 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 18865ebf-4757-4bb6-bad1-0ce452081e52 -t 2000 00:11:04.740 [ 00:11:04.740 { 00:11:04.740 "aliases": [ 00:11:04.740 "lvs/lvol" 00:11:04.740 ], 00:11:04.740 "assigned_rate_limits": { 00:11:04.740 "r_mbytes_per_sec": 0, 00:11:04.740 "rw_ios_per_sec": 0, 00:11:04.740 "rw_mbytes_per_sec": 0, 00:11:04.740 "w_mbytes_per_sec": 0 00:11:04.740 }, 00:11:04.740 "block_size": 4096, 00:11:04.740 "claimed": false, 00:11:04.740 "driver_specific": { 00:11:04.740 "lvol": { 00:11:04.740 "base_bdev": "aio_bdev", 00:11:04.740 "clone": false, 00:11:04.740 "esnap_clone": false, 00:11:04.740 "lvol_store_uuid": "e6c10d8d-a55c-4e9e-a505-0df12299e406", 00:11:04.740 "num_allocated_clusters": 38, 00:11:04.740 "snapshot": false, 00:11:04.740 
"thin_provision": false 00:11:04.740 } 00:11:04.740 }, 00:11:04.740 "name": "18865ebf-4757-4bb6-bad1-0ce452081e52", 00:11:04.740 "num_blocks": 38912, 00:11:04.740 "product_name": "Logical Volume", 00:11:04.740 "supported_io_types": { 00:11:04.740 "abort": false, 00:11:04.740 "compare": false, 00:11:04.740 "compare_and_write": false, 00:11:04.740 "copy": false, 00:11:04.740 "flush": false, 00:11:04.740 "get_zone_info": false, 00:11:04.740 "nvme_admin": false, 00:11:04.740 "nvme_io": false, 00:11:04.740 "nvme_io_md": false, 00:11:04.741 "nvme_iov_md": false, 00:11:04.741 "read": true, 00:11:04.741 "reset": true, 00:11:04.741 "seek_data": true, 00:11:04.741 "seek_hole": true, 00:11:04.741 "unmap": true, 00:11:04.741 "write": true, 00:11:04.741 "write_zeroes": true, 00:11:04.741 "zcopy": false, 00:11:04.741 "zone_append": false, 00:11:04.741 "zone_management": false 00:11:04.741 }, 00:11:04.741 "uuid": "18865ebf-4757-4bb6-bad1-0ce452081e52", 00:11:04.741 "zoned": false 00:11:04.741 } 00:11:04.741 ] 00:11:04.741 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:04.741 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:04.741 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:04.999 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:04.999 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:04.999 13:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:05.257 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:05.257 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.516 [2024-11-06 13:40:46.199586] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:05.516 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:05.516 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:05.516 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.517 13:40:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:05.517 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:05.776 2024/11/06 13:40:46 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e6c10d8d-a55c-4e9e-a505-0df12299e406], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:05.776 request: 00:11:05.776 { 00:11:05.776 "method": "bdev_lvol_get_lvstores", 00:11:05.776 "params": { 00:11:05.777 "uuid": "e6c10d8d-a55c-4e9e-a505-0df12299e406" 00:11:05.777 } 00:11:05.777 } 00:11:05.777 Got JSON-RPC error response 00:11:05.777 GoRPCClient: error on JSON-RPC call 00:11:05.777 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:05.777 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:05.777 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:05.777 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:05.777 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.035 aio_bdev 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18865ebf-4757-4bb6-bad1-0ce452081e52 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=18865ebf-4757-4bb6-bad1-0ce452081e52 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:06.035 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:06.293 13:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 18865ebf-4757-4bb6-bad1-0ce452081e52 -t 2000 00:11:06.293 [ 
00:11:06.293 { 00:11:06.293 "aliases": [ 00:11:06.293 "lvs/lvol" 00:11:06.293 ], 00:11:06.293 "assigned_rate_limits": { 00:11:06.293 "r_mbytes_per_sec": 0, 00:11:06.293 "rw_ios_per_sec": 0, 00:11:06.293 "rw_mbytes_per_sec": 0, 00:11:06.293 "w_mbytes_per_sec": 0 00:11:06.293 }, 00:11:06.293 "block_size": 4096, 00:11:06.293 "claimed": false, 00:11:06.294 "driver_specific": { 00:11:06.294 "lvol": { 00:11:06.294 "base_bdev": "aio_bdev", 00:11:06.294 "clone": false, 00:11:06.294 "esnap_clone": false, 00:11:06.294 "lvol_store_uuid": "e6c10d8d-a55c-4e9e-a505-0df12299e406", 00:11:06.294 "num_allocated_clusters": 38, 00:11:06.294 "snapshot": false, 00:11:06.294 "thin_provision": false 00:11:06.294 } 00:11:06.294 }, 00:11:06.294 "name": "18865ebf-4757-4bb6-bad1-0ce452081e52", 00:11:06.294 "num_blocks": 38912, 00:11:06.294 "product_name": "Logical Volume", 00:11:06.294 "supported_io_types": { 00:11:06.294 "abort": false, 00:11:06.294 "compare": false, 00:11:06.294 "compare_and_write": false, 00:11:06.294 "copy": false, 00:11:06.294 "flush": false, 00:11:06.294 "get_zone_info": false, 00:11:06.294 "nvme_admin": false, 00:11:06.294 "nvme_io": false, 00:11:06.294 "nvme_io_md": false, 00:11:06.294 "nvme_iov_md": false, 00:11:06.294 "read": true, 00:11:06.294 "reset": true, 00:11:06.294 "seek_data": true, 00:11:06.294 "seek_hole": true, 00:11:06.294 "unmap": true, 00:11:06.294 "write": true, 00:11:06.294 "write_zeroes": true, 00:11:06.294 "zcopy": false, 00:11:06.294 "zone_append": false, 00:11:06.294 "zone_management": false 00:11:06.294 }, 00:11:06.294 "uuid": "18865ebf-4757-4bb6-bad1-0ce452081e52", 00:11:06.294 "zoned": false 00:11:06.294 } 00:11:06.294 ] 00:11:06.294 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:06.294 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:06.294 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:06.552 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:06.552 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:06.552 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:06.811 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:06.811 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 18865ebf-4757-4bb6-bad1-0ce452081e52 00:11:07.069 13:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6c10d8d-a55c-4e9e-a505-0df12299e406 00:11:07.326 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:07.584 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:08.150 00:11:08.150 real 0m19.699s 00:11:08.150 user 0m40.634s 00:11:08.150 sys 0m8.183s 00:11:08.150 ************************************ 00:11:08.150 END TEST lvs_grow_dirty 00:11:08.150 ************************************ 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:08.150 nvmf_trace.0 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.150 13:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:08.717 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.718 rmmod nvme_tcp 00:11:08.718 rmmod nvme_fabrics 00:11:08.718 rmmod nvme_keyring 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67261 ']' 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67261 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 67261 ']' 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 67261 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:08.718 13:40:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67261 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:08.718 killing process with pid 67261 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67261' 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 67261 00:11:08.718 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 67261 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:08.976 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:09.234 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.235 13:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:09.235 00:11:09.235 real 0m41.531s 00:11:09.235 user 1m3.270s 00:11:09.235 sys 0m12.742s 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 ************************************ 00:11:09.235 END TEST nvmf_lvs_grow 00:11:09.235 ************************************ 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 ************************************ 00:11:09.235 START TEST nvmf_bdev_io_wait 00:11:09.235 ************************************ 00:11:09.235 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:09.495 * Looking for test storage... 
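The lvs_grow teardown above ends with process_shm archiving the SPDK trace shared-memory file before nvmftestfini tears the target down. A minimal sketch of that capture step, mirroring the find/tar calls in the trace (the archive path is the one printed above; everything else is copied from the traced commands):

    # archive SPDK shared-memory trace files for offline analysis (sketch)
    id=0
    for n in $(find /dev/shm -name "*.${id}" -printf '%f\n'); do   # e.g. nvmf_trace.0
        tar -C /dev/shm/ -czf "/home/vagrant/spdk_repo/spdk/../output/${n}_shm.tar.gz" "$n"
    done

The archived nvmf_trace.0 can then be inspected offline, as the app's own startup notice ("Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug") suggests.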
00:11:09.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.495 --rc genhtml_branch_coverage=1 00:11:09.495 --rc genhtml_function_coverage=1 00:11:09.495 --rc genhtml_legend=1 00:11:09.495 --rc geninfo_all_blocks=1 00:11:09.495 --rc geninfo_unexecuted_blocks=1 00:11:09.495 00:11:09.495 ' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.495 --rc genhtml_branch_coverage=1 00:11:09.495 --rc genhtml_function_coverage=1 00:11:09.495 --rc genhtml_legend=1 00:11:09.495 --rc geninfo_all_blocks=1 00:11:09.495 --rc geninfo_unexecuted_blocks=1 00:11:09.495 00:11:09.495 ' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.495 --rc genhtml_branch_coverage=1 00:11:09.495 --rc genhtml_function_coverage=1 00:11:09.495 --rc genhtml_legend=1 00:11:09.495 --rc geninfo_all_blocks=1 00:11:09.495 --rc geninfo_unexecuted_blocks=1 00:11:09.495 00:11:09.495 ' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.495 --rc genhtml_branch_coverage=1 00:11:09.495 --rc genhtml_function_coverage=1 00:11:09.495 --rc genhtml_legend=1 00:11:09.495 --rc geninfo_all_blocks=1 00:11:09.495 --rc geninfo_unexecuted_blocks=1 00:11:09.495 00:11:09.495 ' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.495 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.496 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
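nvmftestinit, traced next, builds a veth topology so the target can listen inside its own network namespace while the initiator stays on the host. A condensed sketch of what nvmf_veth_init does, using only commands that appear verbatim in the trace below (the second if2/br2 pair and the symmetric iptables rule are elided for brevity):

    # veth topology for the NVMe/TCP target (condensed from nvmf_veth_init)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four pings that follow (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verify this topology end to end before the target process starts.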
00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.496 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.756 
13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:09.756 Cannot find device "nvmf_init_br" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:09.756 Cannot find device "nvmf_init_br2" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:09.756 Cannot find device "nvmf_tgt_br" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.756 Cannot find device "nvmf_tgt_br2" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:09.756 Cannot find device "nvmf_init_br" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:09.756 Cannot find device "nvmf_init_br2" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:09.756 Cannot find device "nvmf_tgt_br" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:09.756 Cannot find device "nvmf_tgt_br2" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:09.756 Cannot find device "nvmf_br" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:09.756 Cannot find device "nvmf_init_if" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:09.756 Cannot find device "nvmf_init_if2" 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:09.756 
13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:09.756 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:10.015 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.016 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:10.016 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:10.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:10.016 00:11:10.016 --- 10.0.0.3 ping statistics --- 00:11:10.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.016 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:10.016 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:10.016 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:10.016 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:11:10.016 00:11:10.016 --- 10.0.0.4 ping statistics --- 00:11:10.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.016 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:10.016 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:10.274 00:11:10.274 --- 10.0.0.1 ping statistics --- 00:11:10.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.274 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:10.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:10.274 00:11:10.274 --- 10.0.0.2 ping statistics --- 00:11:10.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.274 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67721 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67721 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 67721 ']' 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.274 13:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.274 [2024-11-06 13:40:51.058436] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:11:10.274 [2024-11-06 13:40:51.058526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.533 [2024-11-06 13:40:51.215159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.533 [2024-11-06 13:40:51.295809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.533 [2024-11-06 13:40:51.295878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.533 [2024-11-06 13:40:51.295889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.533 [2024-11-06 13:40:51.295898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.533 [2024-11-06 13:40:51.295907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.533 [2024-11-06 13:40:51.297438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.533 [2024-11-06 13:40:51.297510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.533 [2024-11-06 13:40:51.297609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.533 [2024-11-06 13:40:51.297610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.101 13:40:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.101 13:40:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:11.101 13:40:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.101 13:40:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.101 13:40:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.101 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.101 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:11.101 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.101 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.361 [2024-11-06 13:40:52.137650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 Malloc0 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.361 [2024-11-06 13:40:52.211937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67775 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.361 { 00:11:11.361 "params": { 00:11:11.361 "name": "Nvme$subsystem", 00:11:11.361 "trtype": "$TEST_TRANSPORT", 00:11:11.361 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:11:11.361 "adrfam": "ipv4", 00:11:11.361 "trsvcid": "$NVMF_PORT", 00:11:11.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.361 "hdgst": ${hdgst:-false}, 00:11:11.361 "ddgst": ${ddgst:-false} 00:11:11.361 }, 00:11:11.361 "method": "bdev_nvme_attach_controller" 00:11:11.361 } 00:11:11.361 EOF 00:11:11.361 )") 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67777 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67780 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.361 { 00:11:11.361 "params": { 00:11:11.361 "name": "Nvme$subsystem", 00:11:11.361 "trtype": "$TEST_TRANSPORT", 00:11:11.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.361 "adrfam": "ipv4", 00:11:11.361 "trsvcid": "$NVMF_PORT", 00:11:11.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.361 "hdgst": ${hdgst:-false}, 00:11:11.361 "ddgst": ${ddgst:-false} 00:11:11.361 }, 00:11:11.361 "method": "bdev_nvme_attach_controller" 00:11:11.361 } 00:11:11.361 EOF 00:11:11.361 )") 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67783 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.361 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.361 { 00:11:11.361 "params": { 00:11:11.361 "name": "Nvme$subsystem", 00:11:11.361 "trtype": "$TEST_TRANSPORT", 00:11:11.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.361 "adrfam": "ipv4", 00:11:11.361 "trsvcid": "$NVMF_PORT", 00:11:11.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.361 "hdgst": ${hdgst:-false}, 00:11:11.361 "ddgst": ${ddgst:-false} 
00:11:11.361 }, 00:11:11.361 "method": "bdev_nvme_attach_controller" 00:11:11.361 } 00:11:11.361 EOF 00:11:11.361 )") 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.362 { 00:11:11.362 "params": { 00:11:11.362 "name": "Nvme$subsystem", 00:11:11.362 "trtype": "$TEST_TRANSPORT", 00:11:11.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.362 "adrfam": "ipv4", 00:11:11.362 "trsvcid": "$NVMF_PORT", 00:11:11.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.362 "hdgst": ${hdgst:-false}, 00:11:11.362 "ddgst": ${ddgst:-false} 00:11:11.362 }, 00:11:11.362 "method": "bdev_nvme_attach_controller" 00:11:11.362 } 00:11:11.362 EOF 00:11:11.362 )") 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.362 "params": { 00:11:11.362 "name": "Nvme1", 00:11:11.362 "trtype": "tcp", 00:11:11.362 "traddr": "10.0.0.3", 00:11:11.362 "adrfam": "ipv4", 00:11:11.362 "trsvcid": "4420", 00:11:11.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.362 "hdgst": false, 00:11:11.362 "ddgst": false 00:11:11.362 }, 00:11:11.362 "method": "bdev_nvme_attach_controller" 00:11:11.362 }' 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.362 "params": { 00:11:11.362 "name": "Nvme1", 00:11:11.362 "trtype": "tcp", 00:11:11.362 "traddr": "10.0.0.3", 00:11:11.362 "adrfam": "ipv4", 00:11:11.362 "trsvcid": "4420", 00:11:11.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.362 "hdgst": false, 00:11:11.362 "ddgst": false 00:11:11.362 }, 00:11:11.362 "method": "bdev_nvme_attach_controller" 00:11:11.362 }' 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
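Each of the four bdevperf instances (write/read/flush/unmap, pids 67775/67777/67780/67783) receives its controller configuration as generated JSON on /dev/fd/63. A sketch of one such invocation with the parameter values printed in the trace; note the trace only shows the inner attach-controller object, so the outer "subsystems"/"bdev" wrapper here is an assumption about what gen_nvmf_target_json emits:

    # one bdevperf workload against the target (sketch; wrapper object assumed)
    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ] }'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(echo "$cfg") -q 128 -o 4096 -w write -t 1 -s 256

-q, -o, -w, and -t set queue depth, I/O size, workload, and run time; the distinct -m core masks pin each instance to its own core so the four one-second workloads run concurrently against the same subsystem.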
00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.362 "params": { 00:11:11.362 "name": "Nvme1", 00:11:11.362 "trtype": "tcp", 00:11:11.362 "traddr": "10.0.0.3", 00:11:11.362 "adrfam": "ipv4", 00:11:11.362 "trsvcid": "4420", 00:11:11.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.362 "hdgst": false, 00:11:11.362 "ddgst": false 00:11:11.362 }, 00:11:11.362 "method": "bdev_nvme_attach_controller" 00:11:11.362 }' 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.362 "params": { 00:11:11.362 "name": "Nvme1", 00:11:11.362 "trtype": "tcp", 00:11:11.362 "traddr": "10.0.0.3", 00:11:11.362 "adrfam": "ipv4", 00:11:11.362 "trsvcid": "4420", 00:11:11.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.362 "hdgst": false, 00:11:11.362 "ddgst": false 00:11:11.362 }, 00:11:11.362 "method": "bdev_nvme_attach_controller" 00:11:11.362 }' 00:11:11.362 [2024-11-06 13:40:52.280457] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:11.362 [2024-11-06 13:40:52.280737] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:11.362 13:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67775 00:11:11.621 [2024-11-06 13:40:52.295100] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:11.621 [2024-11-06 13:40:52.295368] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:11.621 [2024-11-06 13:40:52.299278] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:11.621 [2024-11-06 13:40:52.299341] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:11.621 [2024-11-06 13:40:52.300621] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:11:11.621 [2024-11-06 13:40:52.300963] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:11.621 [2024-11-06 13:40:52.528823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.959 [2024-11-06 13:40:52.592388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:11.959 [2024-11-06 13:40:52.633006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.959 [2024-11-06 13:40:52.697487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:11.959 [2024-11-06 13:40:52.736788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.959 [2024-11-06 13:40:52.802116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:11.959 [2024-11-06 13:40:52.865596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.218 Running I/O for 1 seconds... 00:11:12.218 [2024-11-06 13:40:52.926616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:12.218 Running I/O for 1 seconds... 00:11:12.218 Running I/O for 1 seconds... 00:11:12.218 Running I/O for 1 seconds... 00:11:13.153 6416.00 IOPS, 25.06 MiB/s 00:11:13.153 Latency(us) 00:11:13.153 [2024-11-06T13:40:54.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.153 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:13.153 Nvme1n1 : 1.02 6422.61 25.09 0.00 0.00 19656.28 7843.26 31583.61 00:11:13.153 [2024-11-06T13:40:54.086Z] =================================================================================================================== 00:11:13.153 [2024-11-06T13:40:54.086Z] Total : 6422.61 25.09 0.00 0.00 19656.28 7843.26 31583.61 00:11:13.153 9820.00 IOPS, 38.36 MiB/s 00:11:13.153 Latency(us) 00:11:13.153 [2024-11-06T13:40:54.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.153 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:13.153 Nvme1n1 : 1.01 9880.51 38.60 0.00 0.00 12901.52 5342.89 21792.69 00:11:13.153 [2024-11-06T13:40:54.086Z] =================================================================================================================== 00:11:13.153 [2024-11-06T13:40:54.086Z] Total : 9880.51 38.60 0.00 0.00 12901.52 5342.89 21792.69 00:11:13.153 6258.00 IOPS, 24.45 MiB/s 00:11:13.153 Latency(us) 00:11:13.153 [2024-11-06T13:40:54.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.153 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:13.153 Nvme1n1 : 1.01 6372.88 24.89 0.00 0.00 20038.07 3158.36 41900.93 00:11:13.153 [2024-11-06T13:40:54.086Z] =================================================================================================================== 00:11:13.153 [2024-11-06T13:40:54.086Z] Total : 6372.88 24.89 0.00 0.00 20038.07 3158.36 41900.93 00:11:13.412 216960.00 IOPS, 847.50 MiB/s 00:11:13.412 Latency(us) 00:11:13.412 [2024-11-06T13:40:54.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.412 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:13.412 Nvme1n1 : 1.00 216593.08 846.07 0.00 0.00 588.02 289.52 2224.01 00:11:13.412 [2024-11-06T13:40:54.345Z] 
=================================================================================================================== 00:11:13.412 [2024-11-06T13:40:54.345Z] Total : 216593.08 846.07 0.00 0.00 588.02 289.52 2224.01 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67777 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67780 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67783 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.412 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.671 rmmod nvme_tcp 00:11:13.671 rmmod nvme_fabrics 00:11:13.671 rmmod nvme_keyring 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67721 ']' 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67721 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 67721 ']' 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 67721 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67721 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:13.671 killing process with pid 67721 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 67721' 00:11:13.671 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 67721 00:11:13.672 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 67721 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:13.930 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.189 13:40:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.189 13:40:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:14.189 00:11:14.189 real 0m4.954s 00:11:14.189 user 0m18.770s 00:11:14.189 sys 0m2.649s 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.189 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 ************************************ 00:11:14.189 END TEST nvmf_bdev_io_wait 00:11:14.189 ************************************ 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.449 ************************************ 00:11:14.449 START TEST nvmf_queue_depth 00:11:14.449 ************************************ 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:14.449 * Looking for test storage... 00:11:14.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.449 --rc genhtml_branch_coverage=1 00:11:14.449 --rc genhtml_function_coverage=1 00:11:14.449 --rc genhtml_legend=1 00:11:14.449 --rc geninfo_all_blocks=1 00:11:14.449 --rc geninfo_unexecuted_blocks=1 00:11:14.449 00:11:14.449 ' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.449 --rc genhtml_branch_coverage=1 00:11:14.449 --rc genhtml_function_coverage=1 00:11:14.449 --rc genhtml_legend=1 00:11:14.449 --rc geninfo_all_blocks=1 00:11:14.449 --rc geninfo_unexecuted_blocks=1 00:11:14.449 00:11:14.449 ' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.449 --rc genhtml_branch_coverage=1 00:11:14.449 --rc genhtml_function_coverage=1 00:11:14.449 --rc genhtml_legend=1 00:11:14.449 --rc geninfo_all_blocks=1 00:11:14.449 --rc geninfo_unexecuted_blocks=1 00:11:14.449 00:11:14.449 ' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.449 --rc genhtml_branch_coverage=1 00:11:14.449 --rc genhtml_function_coverage=1 00:11:14.449 --rc genhtml_legend=1 00:11:14.449 --rc geninfo_all_blocks=1 00:11:14.449 --rc geninfo_unexecuted_blocks=1 00:11:14.449 00:11:14.449 ' 00:11:14.449 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.709 13:40:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.709 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:14.710 
13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.710 13:40:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:14.710 Cannot find device "nvmf_init_br" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:14.710 Cannot find device "nvmf_init_br2" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:14.710 Cannot find device "nvmf_tgt_br" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.710 Cannot find device "nvmf_tgt_br2" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:14.710 Cannot find device "nvmf_init_br" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:14.710 Cannot find device "nvmf_init_br2" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:14.710 Cannot find device "nvmf_tgt_br" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:14.710 Cannot find device "nvmf_tgt_br2" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:14.710 Cannot find device "nvmf_br" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:14.710 Cannot find device "nvmf_init_if" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:14.710 Cannot find device "nvmf_init_if2" 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.710 13:40:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:14.710 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:14.970 
13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:14.970 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:15.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:11:15.229 00:11:15.229 --- 10.0.0.3 ping statistics --- 00:11:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.229 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:15.229 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:15.229 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:11:15.229 00:11:15.229 --- 10.0.0.4 ping statistics --- 00:11:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.229 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:11:15.229 00:11:15.229 --- 10.0.0.1 ping statistics --- 00:11:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.229 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:15.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:11:15.229 00:11:15.229 --- 10.0.0.2 ping statistics --- 00:11:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.229 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.229 13:40:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68071 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68071 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68071 ']' 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.229 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:15.229 [2024-11-06 13:40:56.073375] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
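The ping exchange just above is the harness verifying the veth/bridge wiring it laid down a few entries earlier. Condensed into a minimal sketch with the same interface and address names as the log (the second initiator/target pair, the iptables ACCEPT rules, and error handling are omitted):

# One veth pair per side; the target end lives in a network namespace and
# traffic crosses the nvmf_br bridge. Mirrors the ip commands traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3   # host initiator side -> target end inside the namespace

Only the network view is namespaced, which is why the /var/tmp RPC sockets used below stay reachable from the host side of the test.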
00:11:15.229 [2024-11-06 13:40:56.073460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.488 [2024-11-06 13:40:56.232188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.488 [2024-11-06 13:40:56.296691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.488 [2024-11-06 13:40:56.296749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.488 [2024-11-06 13:40:56.296759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.488 [2024-11-06 13:40:56.296767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.488 [2024-11-06 13:40:56.296774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.488 [2024-11-06 13:40:56.297160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.056 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:16.056 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:16.056 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.056 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.056 13:40:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 [2024-11-06 13:40:57.037380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 Malloc0 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
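The target bring-up that starts here completes just below with nvmf_subsystem_add_ns and nvmf_subsystem_add_listener. Since rpc_cmd is a thin wrapper over rpc.py, the whole sequence is equivalent to roughly this session against the default /var/tmp/spdk.sock (the script path is an assumption; every argument mirrors the traced commands):

RPC=scripts/rpc.py   # assumed location inside the SPDK tree
$RPC nvmf_create_transport -t tcp -o -u 8192    # same TCP transport options as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420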
00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 [2024-11-06 13:40:57.107483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.315 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68121 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68121 /var/tmp/bdevperf.sock 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68121 ']' 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:16.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:16.316 13:40:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 [2024-11-06 13:40:57.170282] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
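On the initiator side, the entries that follow show the mirror image: bdevperf starts with -z so it idles until configured over its private RPC socket, the NVMe-oF controller is attached through that socket, and bdevperf.py then triggers the run. Condensed (paths mirror the log; the backgrounding and the omitted wait-for-socket synchronization are simplifications):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 is the point of the test: a queue depth far beyond typical defaults, which is what the queueing path on the target side is being exercised with.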
00:11:16.316 [2024-11-06 13:40:57.170362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68121 ] 00:11:16.575 [2024-11-06 13:40:57.325994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.575 [2024-11-06 13:40:57.397433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.513 NVMe0n1 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.513 13:40:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:17.513 Running I/O for 10 seconds... 00:11:19.382 10555.00 IOPS, 41.23 MiB/s [2024-11-06T13:41:01.283Z] 10876.50 IOPS, 42.49 MiB/s [2024-11-06T13:41:02.673Z] 11047.00 IOPS, 43.15 MiB/s [2024-11-06T13:41:03.608Z] 11143.75 IOPS, 43.53 MiB/s [2024-11-06T13:41:04.545Z] 11165.20 IOPS, 43.61 MiB/s [2024-11-06T13:41:05.481Z] 11242.67 IOPS, 43.92 MiB/s [2024-11-06T13:41:06.418Z] 11338.14 IOPS, 44.29 MiB/s [2024-11-06T13:41:07.354Z] 11394.00 IOPS, 44.51 MiB/s [2024-11-06T13:41:08.320Z] 11454.33 IOPS, 44.74 MiB/s [2024-11-06T13:41:08.320Z] 11480.30 IOPS, 44.84 MiB/s 00:11:27.387 Latency(us) 00:11:27.387 [2024-11-06T13:41:08.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.388 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:27.388 Verification LBA range: start 0x0 length 0x4000 00:11:27.388 NVMe0n1 : 10.06 11514.68 44.98 0.00 0.00 88608.87 15160.13 61482.77 00:11:27.388 [2024-11-06T13:41:08.321Z] =================================================================================================================== 00:11:27.388 [2024-11-06T13:41:08.321Z] Total : 11514.68 44.98 0.00 0.00 88608.87 15160.13 61482.77 00:11:27.388 { 00:11:27.388 "results": [ 00:11:27.388 { 00:11:27.388 "job": "NVMe0n1", 00:11:27.388 "core_mask": "0x1", 00:11:27.388 "workload": "verify", 00:11:27.388 "status": "finished", 00:11:27.388 "verify_range": { 00:11:27.388 "start": 0, 00:11:27.388 "length": 16384 00:11:27.388 }, 00:11:27.388 "queue_depth": 1024, 00:11:27.388 "io_size": 4096, 00:11:27.388 "runtime": 10.057078, 00:11:27.388 "iops": 11514.676529306027, 00:11:27.388 "mibps": 44.97920519260167, 00:11:27.388 "io_failed": 0, 00:11:27.388 "io_timeout": 0, 00:11:27.388 "avg_latency_us": 88608.87165742865, 00:11:27.388 "min_latency_us": 15160.134939759037, 00:11:27.388 "max_latency_us": 61482.76947791165 00:11:27.388 } 00:11:27.388 ], 00:11:27.388 "core_count": 1 00:11:27.388 } 00:11:27.646 13:41:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68121 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68121 ']' 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 68121 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68121 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.646 killing process with pid 68121 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68121' 00:11:27.646 Received shutdown signal, test time was about 10.000000 seconds 00:11:27.646 00:11:27.646 Latency(us) 00:11:27.646 [2024-11-06T13:41:08.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.646 [2024-11-06T13:41:08.579Z] =================================================================================================================== 00:11:27.646 [2024-11-06T13:41:08.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68121 00:11:27.646 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68121 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.905 rmmod nvme_tcp 00:11:27.905 rmmod nvme_fabrics 00:11:27.905 rmmod nvme_keyring 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68071 ']' 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68071 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68071 ']' 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- 
# kill -0 68071 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.905 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68071 00:11:28.164 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:28.164 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:28.164 killing process with pid 68071 00:11:28.164 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68071' 00:11:28.164 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68071 00:11:28.164 13:41:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68071 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:28.422 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:28.679 00:11:28.679 real 0m14.342s 00:11:28.679 user 0m23.299s 00:11:28.679 sys 0m2.754s 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 ************************************ 00:11:28.679 END TEST nvmf_queue_depth 00:11:28.679 ************************************ 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.679 13:41:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.679 ************************************ 00:11:28.679 START TEST nvmf_target_multipath 00:11:28.679 ************************************ 00:11:28.680 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:28.939 * Looking for test storage... 
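A quick cross-check of the queue-depth results above: the MiB/s column is just IOPS scaled by the 4 KiB I/O size, MiB/s = IOPS * 4096 / 2^20. For example:

awk 'BEGIN { printf "%.2f\n", 11514.68 * 4096 / 1048576 }'   # -> 44.98, matching NVMe0n1 above

The same scaling reproduces the earlier bdev_io_wait numbers, e.g. 6422.61 read IOPS -> 25.09 MiB/s.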
00:11:28.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.939 --rc genhtml_branch_coverage=1 00:11:28.939 --rc genhtml_function_coverage=1 00:11:28.939 --rc genhtml_legend=1 00:11:28.939 --rc geninfo_all_blocks=1 00:11:28.939 --rc geninfo_unexecuted_blocks=1 00:11:28.939 00:11:28.939 ' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.939 --rc genhtml_branch_coverage=1 00:11:28.939 --rc genhtml_function_coverage=1 00:11:28.939 --rc genhtml_legend=1 00:11:28.939 --rc geninfo_all_blocks=1 00:11:28.939 --rc geninfo_unexecuted_blocks=1 00:11:28.939 00:11:28.939 ' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.939 --rc genhtml_branch_coverage=1 00:11:28.939 --rc genhtml_function_coverage=1 00:11:28.939 --rc genhtml_legend=1 00:11:28.939 --rc geninfo_all_blocks=1 00:11:28.939 --rc geninfo_unexecuted_blocks=1 00:11:28.939 00:11:28.939 ' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.939 --rc genhtml_branch_coverage=1 00:11:28.939 --rc genhtml_function_coverage=1 00:11:28.939 --rc genhtml_legend=1 00:11:28.939 --rc geninfo_all_blocks=1 00:11:28.939 --rc geninfo_unexecuted_blocks=1 00:11:28.939 00:11:28.939 ' 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.939 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 
13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:28.940 13:41:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:28.940 Cannot find device "nvmf_init_br" 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:28.940 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:29.200 Cannot find device "nvmf_init_br2" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:29.200 Cannot find device "nvmf_tgt_br" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.200 Cannot find device "nvmf_tgt_br2" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:29.200 Cannot find device "nvmf_init_br" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:29.200 Cannot find device "nvmf_init_br2" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:29.200 Cannot find device "nvmf_tgt_br" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:29.200 Cannot find device "nvmf_tgt_br2" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:29.200 Cannot find device "nvmf_br" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:29.200 Cannot find device "nvmf_init_if" 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:29.200 13:41:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:29.200 Cannot find device "nvmf_init_if2" 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:29.200 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
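Condensed, the topology that nvmf_veth_init has built by this point looks as follows (commands lifted from the trace; the remaining link-up, bridge, and iptables steps continue below):

```bash
#!/usr/bin/env bash
# Two initiator-side and two target-side veth pairs; the target ends are moved
# into a dedicated network namespace where nvmf_tgt will run.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses live on the host; target addresses live in the namespace,
# giving the multipath test two independent endpoints (.3 and .4) on 10.0.0.0/24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
```

The *_br peers are then enslaved to the nvmf_br bridge so host and namespace can reach each other, which is exactly what the four ping checks below verify.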
00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:29.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:29.461 00:11:29.461 --- 10.0.0.3 ping statistics --- 00:11:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.461 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:29.461 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:29.461 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:11:29.461 00:11:29.461 --- 10.0.0.4 ping statistics --- 00:11:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.461 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:29.461 00:11:29.461 --- 10.0.0.1 ping statistics --- 00:11:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.461 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:29.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:29.461 00:11:29.461 --- 10.0.0.2 ping statistics --- 00:11:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.461 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68515 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68515 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 68515 ']' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
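With connectivity verified, the target binary is launched inside the namespace and the harness polls its RPC socket until it answers. A simplified stand-in for the nvmfappstart/waitforlisten pair traced here (the real helpers add a retry cap and shared-memory cleanup):

```bash
#!/usr/bin/env bash
# Launch nvmf_tgt in the target namespace, as in the trace (-i 0: shm id,
# -e 0xFFFF: tracepoint group mask, -m 0xF: reactors on cores 0-3).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Wait until the app answers on its UNIX-domain RPC socket, bailing out if
# the process dies first.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
```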
00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.461 13:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:29.720 [2024-11-06 13:41:10.423460] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:29.720 [2024-11-06 13:41:10.423539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.720 [2024-11-06 13:41:10.576311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.720 [2024-11-06 13:41:10.651867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.720 [2024-11-06 13:41:10.651934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.720 [2024-11-06 13:41:10.651945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.720 [2024-11-06 13:41:10.651955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.720 [2024-11-06 13:41:10.651962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.006 [2024-11-06 13:41:10.653341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.006 [2024-11-06 13:41:10.653430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.006 [2024-11-06 13:41:10.653520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.006 [2024-11-06 13:41:10.653523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.572 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:30.830 [2024-11-06 13:41:11.593461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.830 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:31.088 Malloc0 00:11:31.088 13:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
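Gathered into one place, the RPC sequence that provisions the dual-path subsystem (flags exactly as traced; consistent with the test's purpose, -r turns on ANA reporting for the subsystem):

```bash
#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDKISFASTANDAWESOME -r                            # allow any host, ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
```

The initiator then connects once per listener (`nvme connect ... -a 10.0.0.3` and `... -a 10.0.0.4`), so the kernel exposes a single namespace with two controller paths, which show up below as nvme0c0n1 and nvme0c1n1.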
00:11:31.347 13:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.606 13:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:31.865 [2024-11-06 13:41:12.598434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:31.865 13:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:32.123 [2024-11-06 13:41:12.826258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:32.123 13:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:32.382 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68658 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:34.917 13:41:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:34.917 [global] 00:11:34.917 thread=1 00:11:34.917 invalidate=1 00:11:34.917 rw=randrw 00:11:34.917 time_based=1 00:11:34.917 runtime=6 00:11:34.917 ioengine=libaio 00:11:34.917 direct=1 00:11:34.917 bs=4096 00:11:34.917 iodepth=128 00:11:34.917 norandommap=0 00:11:34.917 numjobs=1 00:11:34.917 00:11:34.917 verify_dump=1 00:11:34.917 verify_backlog=512 00:11:34.917 verify_state_save=0 00:11:34.917 do_verify=1 00:11:34.917 verify=crc32c-intel 00:11:34.917 [job0] 00:11:34.917 filename=/dev/nvme0n1 00:11:34.917 Could not set queue depth (nvme0n1) 00:11:34.917 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:34.917 fio-3.35 00:11:34.917 Starting 1 thread 00:11:35.484 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:35.743 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:36.037 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:36.038 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:36.038 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:36.974 13:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:36.974 13:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:36.974 13:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:36.974 13:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:37.233 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:37.492 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:38.428 13:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:38.428 13:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.428 13:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:38.428 13:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68658 00:11:41.006 00:11:41.006 job0: (groupid=0, jobs=1): err= 0: pid=68680: Wed Nov 6 13:41:21 2024 00:11:41.006 read: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(320MiB/6003msec) 00:11:41.006 slat (usec): min=4, max=4897, avg=39.69, stdev=155.63 00:11:41.006 clat (usec): min=392, max=21118, avg=6467.71, stdev=1309.37 00:11:41.006 lat (usec): min=429, max=21136, avg=6507.39, stdev=1313.86 00:11:41.006 clat percentiles (usec): 00:11:41.006 | 1.00th=[ 3916], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5604], 00:11:41.006 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6325], 60.00th=[ 6521], 00:11:41.006 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7767], 95.00th=[ 8848], 00:11:41.006 | 99.00th=[11469], 99.50th=[12387], 99.90th=[14615], 99.95th=[16581], 00:11:41.006 | 99.99th=[21103] 00:11:41.006 bw ( KiB/s): min=14176, max=36344, per=50.57%, avg=27568.00, stdev=7527.33, samples=11 00:11:41.006 iops : min= 3544, max= 9086, avg=6892.00, stdev=1881.83, samples=11 00:11:41.006 write: IOPS=7891, BW=30.8MiB/s (32.3MB/s)(162MiB/5253msec); 0 zone resets 00:11:41.006 slat (usec): min=9, max=2710, avg=53.48, stdev=103.10 00:11:41.006 clat (usec): min=192, max=19571, avg=5588.70, stdev=1254.30 00:11:41.006 lat (usec): min=366, max=20973, avg=5642.18, stdev=1260.46 00:11:41.006 clat percentiles (usec): 00:11:41.006 | 1.00th=[ 3228], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 4883], 00:11:41.006 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5669], 00:11:41.006 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 6521], 95.00th=[ 7832], 00:11:41.006 | 99.00th=[10552], 99.50th=[11338], 99.90th=[13829], 99.95th=[15401], 00:11:41.006 | 99.99th=[19268] 00:11:41.006 bw ( KiB/s): min=14760, max=35688, per=87.25%, avg=27541.82, stdev=7206.03, samples=11 00:11:41.006 iops : min= 3690, max= 8922, avg=6885.45, stdev=1801.51, samples=11 00:11:41.006 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:11:41.006 lat (msec) : 2=0.13%, 4=2.27%, 10=95.61%, 20=1.91%, 50=0.01% 00:11:41.006 cpu : usr=7.60%, sys=33.25%, ctx=9518, majf=0, minf=90 00:11:41.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:41.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.006 issued rwts: total=81813,41454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.006 00:11:41.006 Run status group 0 (all jobs): 00:11:41.006 READ: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=320MiB (335MB), run=6003-6003msec 00:11:41.006 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=162MiB (170MB), run=5253-5253msec 00:11:41.006 00:11:41.006 Disk stats (read/write): 00:11:41.006 nvme0n1: ios=80506/40821, merge=0/0, ticks=461916/198262, in_queue=660178, util=98.63% 00:11:41.006 13:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:41.266 13:41:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:41.525 13:41:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68809 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:42.499 13:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:42.499 [global] 00:11:42.499 thread=1 00:11:42.499 invalidate=1 00:11:42.499 rw=randrw 00:11:42.499 time_based=1 00:11:42.499 runtime=6 00:11:42.499 ioengine=libaio 00:11:42.499 direct=1 00:11:42.499 bs=4096 00:11:42.499 iodepth=128 00:11:42.499 norandommap=0 00:11:42.499 numjobs=1 00:11:42.499 00:11:42.499 verify_dump=1 00:11:42.499 verify_backlog=512 00:11:42.499 verify_state_save=0 00:11:42.499 do_verify=1 00:11:42.499 verify=crc32c-intel 00:11:42.499 [job0] 00:11:42.499 filename=/dev/nvme0n1 00:11:42.499 Could not set queue depth (nvme0n1) 00:11:42.499 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:42.499 fio-3.35 00:11:42.499 Starting 1 thread 00:11:43.434 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:43.693 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:43.952 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:44.888 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:44.888 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:44.888 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:44.888 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:45.148 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:45.406 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:45.406 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:45.407 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:46.344 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:46.344 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:46.344 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:46.344 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68809 00:11:48.878 00:11:48.878 job0: (groupid=0, jobs=1): err= 0: pid=68840: Wed Nov 6 13:41:29 2024 00:11:48.878 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(323MiB/6002msec) 00:11:48.878 slat (usec): min=4, max=8696, avg=35.05, stdev=141.98 00:11:48.878 clat (usec): min=257, max=28885, avg=6437.06, stdev=1700.33 00:11:48.878 lat (usec): min=283, max=29454, avg=6472.12, stdev=1706.68 00:11:48.878 clat percentiles (usec): 00:11:48.878 | 1.00th=[ 2409], 5.00th=[ 4015], 10.00th=[ 4621], 20.00th=[ 5473], 00:11:48.878 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6718], 00:11:48.878 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 7767], 95.00th=[ 8848], 00:11:48.878 | 99.00th=[10945], 99.50th=[13829], 99.90th=[20841], 99.95th=[24511], 00:11:48.878 | 99.99th=[27395] 00:11:48.878 bw ( KiB/s): min=13984, max=40240, per=51.06%, avg=28116.36, stdev=8956.93, samples=11 00:11:48.878 iops : min= 3496, max=10060, avg=7029.09, stdev=2239.23, samples=11 00:11:48.878 write: IOPS=8252, BW=32.2MiB/s (33.8MB/s)(164MiB/5095msec); 0 zone resets 00:11:48.878 slat (usec): min=12, max=1980, avg=49.61, stdev=90.07 00:11:48.878 clat (usec): min=162, max=24966, avg=5429.68, stdev=1614.58 00:11:48.878 lat (usec): min=210, max=25001, avg=5479.29, stdev=1621.77 00:11:48.878 clat percentiles (usec): 00:11:48.878 | 1.00th=[ 2147], 5.00th=[ 3130], 10.00th=[ 3589], 20.00th=[ 4228], 00:11:48.878 | 30.00th=[ 4817], 40.00th=[ 5276], 50.00th=[ 5538], 60.00th=[ 5800], 00:11:48.878 | 70.00th=[ 5997], 80.00th=[ 6259], 90.00th=[ 6652], 95.00th=[ 7373], 00:11:48.878 | 99.00th=[10552], 99.50th=[15139], 99.90th=[17433], 99.95th=[18482], 00:11:48.878 | 99.99th=[23725] 00:11:48.878 bw ( KiB/s): min=14176, max=40736, per=85.16%, avg=28110.55, stdev=8727.18, samples=11 00:11:48.878 iops : min= 3544, max=10184, avg=7027.64, stdev=2181.80, samples=11 00:11:48.878 lat (usec) : 250=0.01%, 500=0.04%, 750=0.11%, 1000=0.14% 00:11:48.878 lat (msec) : 2=0.54%, 4=7.94%, 10=89.49%, 20=1.63%, 50=0.10% 00:11:48.878 cpu : usr=7.44%, sys=34.00%, ctx=10964, majf=0, minf=139 00:11:48.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:48.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.878 issued rwts: total=82625,42045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.878 00:11:48.878 Run status group 0 (all jobs): 00:11:48.878 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=323MiB (338MB), run=6002-6002msec 00:11:48.878 WRITE: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=164MiB (172MB), run=5095-5095msec 00:11:48.878 00:11:48.878 Disk stats (read/write): 00:11:48.878 nvme0n1: ios=81756/41331, merge=0/0, ticks=462340/188975, in_queue=651315, util=98.63% 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:11:48.878 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.136 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.136 rmmod nvme_tcp 00:11:49.136 rmmod nvme_fabrics 00:11:49.136 rmmod nvme_keyring 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68515 ']' 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68515 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 68515 ']' 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 68515 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.136 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68515 00:11:49.395 killing process with pid 68515 00:11:49.395 
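The multipath trace above flips ANA states over RPC and then polls sysfs until the kernel's multipath view agrees. A minimal sketch of that polling helper, reconstructed from the xtrace lines (the function name, the 20-iteration bound, and the sysfs path all appear in the trace; the exact failure handling is an assumption):

    check_ana_state() {
        # reconstructed from the xtrace above; treat as an approximation, not the verbatim helper
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # wait until the device node exists and reports the expected ANA state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            (( timeout-- == 0 )) && return 1   # ~20s budget, per 'local timeout=20' in the trace
        done
    }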
13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.395 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.395 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68515' 00:11:49.395 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 68515 00:11:49.395 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 68515 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:49.654 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:49.655 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:49.914 13:41:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:49.914 00:11:49.914 real 0m21.155s 00:11:49.914 user 1m19.385s 00:11:49.914 sys 0m9.472s 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 ************************************ 00:11:49.914 END TEST nvmf_target_multipath 00:11:49.914 ************************************ 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 ************************************ 00:11:49.914 START TEST nvmf_zcopy 00:11:49.914 ************************************ 00:11:49.914 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:50.174 * Looking for test storage... 
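Each suite runs under run_test, which prints the asterisk banners and the real/user/sys timing seen above. A rough sketch of that wrapper, inferred from the banners and the "'[' 3 -le 1 ']'" argument guard in the trace (the body is an assumption, not the verbatim autotest_common.sh helper):

    run_test() {
        # inferred from the START TEST / END TEST banners in this log
        [[ $# -le 1 ]] && return 1          # need a test name plus a command, per the guard above
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                           # produces the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }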
00:11:50.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:50.174 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:50.174 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:11:50.174 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:50.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.174 --rc genhtml_branch_coverage=1 00:11:50.174 --rc genhtml_function_coverage=1 00:11:50.174 --rc genhtml_legend=1 00:11:50.174 --rc geninfo_all_blocks=1 00:11:50.174 --rc geninfo_unexecuted_blocks=1 00:11:50.174 00:11:50.174 ' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:50.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.174 --rc genhtml_branch_coverage=1 00:11:50.174 --rc genhtml_function_coverage=1 00:11:50.174 --rc genhtml_legend=1 00:11:50.174 --rc geninfo_all_blocks=1 00:11:50.174 --rc geninfo_unexecuted_blocks=1 00:11:50.174 00:11:50.174 ' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:50.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.174 --rc genhtml_branch_coverage=1 00:11:50.174 --rc genhtml_function_coverage=1 00:11:50.174 --rc genhtml_legend=1 00:11:50.174 --rc geninfo_all_blocks=1 00:11:50.174 --rc geninfo_unexecuted_blocks=1 00:11:50.174 00:11:50.174 ' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:50.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.174 --rc genhtml_branch_coverage=1 00:11:50.174 --rc genhtml_function_coverage=1 00:11:50.174 --rc genhtml_legend=1 00:11:50.174 --rc geninfo_all_blocks=1 00:11:50.174 --rc geninfo_unexecuted_blocks=1 00:11:50.174 00:11:50.174 ' 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
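The lcov version gate traced above walks scripts/common.sh's cmp_versions field by field: split both version strings on '.-:', then compare each field numerically until one wins. A condensed sketch following those visible steps (defaulting missing fields to 0 and the equal-versions return are assumptions):

    lt() { cmp_versions "$1" '<' "$2"; }    # e.g. lt 1.15 2, as in the trace

    cmp_versions() {
        local ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"      # split "1.15" -> (1 15), matching the IFS=.-: lines above
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # assumed: absent fields count as 0
            (( d1 > d2 )) && { [[ $2 == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $2 == '<' ]]; return; }
        done
        return 1                            # equal: neither '<' nor '>' holds (assumed)
    }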
00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.174 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
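Sourcing nvmf/common.sh above pins the initiator identity and app arguments before any network setup. The essential lines, as they appear in the trace (nvme-cli must be installed; deriving the host ID by stripping the NQN prefix is an assumption about the helper's implementation):

    # initiator identity, per the nvmf/common.sh trace above
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:f9261117-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # uuid after the last ':' (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # app arguments assembled by build_nvmf_app_args: shm id plus the full tracepoint mask
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)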
00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.175 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:50.434 Cannot find device "nvmf_init_br" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:50.434 13:41:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:50.434 Cannot find device "nvmf_init_br2" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:50.434 Cannot find device "nvmf_tgt_br" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.434 Cannot find device "nvmf_tgt_br2" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:50.434 Cannot find device "nvmf_init_br" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:50.434 Cannot find device "nvmf_init_br2" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:50.434 Cannot find device "nvmf_tgt_br" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:50.434 Cannot find device "nvmf_tgt_br2" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:50.434 Cannot find device "nvmf_br" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:50.434 Cannot find device "nvmf_init_if" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:50.434 Cannot find device "nvmf_init_if2" 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:50.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:50.434 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:50.694 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:50.953 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:50.953 13:41:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:50.953 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:50.953 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:50.953 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:50.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:11:50.954 00:11:50.954 --- 10.0.0.3 ping statistics --- 00:11:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.954 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:50.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:50.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:11:50.954 00:11:50.954 --- 10.0.0.4 ping statistics --- 00:11:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.954 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:11:50.954 00:11:50.954 --- 10.0.0.1 ping statistics --- 00:11:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.954 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:50.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:11:50.954 00:11:50.954 --- 10.0.0.2 ping statistics --- 00:11:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.954 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69177 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69177 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 69177 ']' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.954 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:50.954 [2024-11-06 13:41:31.786665] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:11:50.954 [2024-11-06 13:41:31.786753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.213 [2024-11-06 13:41:31.940130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.213 [2024-11-06 13:41:32.006493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.213 [2024-11-06 13:41:32.006544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.213 [2024-11-06 13:41:32.006554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.213 [2024-11-06 13:41:32.006563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.213 [2024-11-06 13:41:32.006570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.213 [2024-11-06 13:41:32.006956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.781 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.781 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:11:51.781 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.781 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.781 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 [2024-11-06 13:41:32.768831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 [2024-11-06 13:41:32.792947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 malloc0 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:52.040 { 00:11:52.040 "params": { 00:11:52.040 "name": "Nvme$subsystem", 00:11:52.040 "trtype": "$TEST_TRANSPORT", 00:11:52.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.040 "adrfam": "ipv4", 00:11:52.040 "trsvcid": "$NVMF_PORT", 00:11:52.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.040 "hdgst": ${hdgst:-false}, 00:11:52.040 "ddgst": ${ddgst:-false} 00:11:52.040 }, 00:11:52.040 "method": "bdev_nvme_attach_controller" 00:11:52.040 } 00:11:52.040 EOF 00:11:52.040 )") 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
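The verify pass above feeds a generated JSON config to bdevperf through a file descriptor; the substituted JSON appears in the trace lines that follow. A standalone equivalent of that invocation, using the command line and paths from this trace (process substitution stands in for the harness's /dev/fd plumbing):

    # drive the same 10-second verify workload against the target, per the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192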
00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:52.040 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:52.040 "params": { 00:11:52.040 "name": "Nvme1", 00:11:52.040 "trtype": "tcp", 00:11:52.040 "traddr": "10.0.0.3", 00:11:52.040 "adrfam": "ipv4", 00:11:52.040 "trsvcid": "4420", 00:11:52.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.040 "hdgst": false, 00:11:52.040 "ddgst": false 00:11:52.040 }, 00:11:52.040 "method": "bdev_nvme_attach_controller" 00:11:52.040 }' 00:11:52.040 [2024-11-06 13:41:32.901535] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:11:52.040 [2024-11-06 13:41:32.901598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69228 ] 00:11:52.298 [2024-11-06 13:41:33.055184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.298 [2024-11-06 13:41:33.120276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.557 Running I/O for 10 seconds... 00:11:54.431 7914.00 IOPS, 61.83 MiB/s [2024-11-06T13:41:36.740Z] 7964.50 IOPS, 62.22 MiB/s [2024-11-06T13:41:37.677Z] 7950.33 IOPS, 62.11 MiB/s [2024-11-06T13:41:38.614Z] 7970.50 IOPS, 62.27 MiB/s [2024-11-06T13:41:39.551Z] 7963.00 IOPS, 62.21 MiB/s [2024-11-06T13:41:40.489Z] 7963.00 IOPS, 62.21 MiB/s [2024-11-06T13:41:41.426Z] 7964.43 IOPS, 62.22 MiB/s [2024-11-06T13:41:42.363Z] 7951.88 IOPS, 62.12 MiB/s [2024-11-06T13:41:43.748Z] 7950.00 IOPS, 62.11 MiB/s [2024-11-06T13:41:43.748Z] 7947.90 IOPS, 62.09 MiB/s 00:12:02.815 Latency(us) 00:12:02.815 [2024-11-06T13:41:43.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.815 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:02.815 Verification LBA range: start 0x0 length 0x1000 00:12:02.815 Nvme1n1 : 10.01 7952.52 62.13 0.00 0.00 16050.88 276.36 23161.32 00:12:02.815 [2024-11-06T13:41:43.748Z] =================================================================================================================== 00:12:02.815 [2024-11-06T13:41:43.748Z] Total : 7952.52 62.13 0.00 0.00 16050.88 276.36 23161.32 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69345 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:02.815 { 00:12:02.815 "params": { 00:12:02.815 "name": "Nvme$subsystem", 
00:12:02.815 "trtype": "$TEST_TRANSPORT", 00:12:02.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:02.815 "adrfam": "ipv4", 00:12:02.815 "trsvcid": "$NVMF_PORT", 00:12:02.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:02.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:02.815 "hdgst": ${hdgst:-false}, 00:12:02.815 "ddgst": ${ddgst:-false} 00:12:02.815 }, 00:12:02.815 "method": "bdev_nvme_attach_controller" 00:12:02.815 } 00:12:02.815 EOF 00:12:02.815 )") 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:02.815 [2024-11-06 13:41:43.621982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.815 [2024-11-06 13:41:43.622187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.815 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:02.815 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:02.815 "params": { 00:12:02.815 "name": "Nvme1", 00:12:02.815 "trtype": "tcp", 00:12:02.815 "traddr": "10.0.0.3", 00:12:02.815 "adrfam": "ipv4", 00:12:02.815 "trsvcid": "4420", 00:12:02.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:02.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:02.815 "hdgst": false, 00:12:02.815 "ddgst": false 00:12:02.815 }, 00:12:02.815 "method": "bdev_nvme_attach_controller" 00:12:02.815 }' 00:12:02.815 [2024-11-06 13:41:43.637909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.815 [2024-11-06 13:41:43.637935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.815 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.815 [2024-11-06 13:41:43.653854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.815 [2024-11-06 13:41:43.653878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.815 [2024-11-06 13:41:43.656934] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:12:02.815 [2024-11-06 13:41:43.656999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69345 ] 00:12:02.815 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.815 [2024-11-06 13:41:43.665833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.815 [2024-11-06 13:41:43.665970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.816 [2024-11-06 13:41:43.677824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.816 [2024-11-06 13:41:43.677943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.816 [2024-11-06 13:41:43.689806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.816 [2024-11-06 13:41:43.689970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.816 [2024-11-06 13:41:43.705778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.816 [2024-11-06 13:41:43.705902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.816 [2024-11-06 13:41:43.721756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.816 [2024-11-06 13:41:43.721870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.816 [2024-11-06 13:41:43.737732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.816 [2024-11-06 13:41:43.737832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:02.816 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.753710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.753810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.769690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.769790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.785665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.785684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.801645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.801666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.809501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.075 [2024-11-06 13:41:43.813633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.813655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.829607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.829632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.841585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.841606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.853573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.853593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.865549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.865568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.877544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.877563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 [2024-11-06 13:41:43.879455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.075 [2024-11-06 13:41:43.889538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.075 [2024-11-06 13:41:43.889560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.075 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.901516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.901541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.913491] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.913509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.929479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.929505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.945442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.945466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.961417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.961439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.977397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.977417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.076 [2024-11-06 13:41:43.993372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.076 [2024-11-06 13:41:43.993392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.076 2024/11/06 13:41:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.009349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.009483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.021345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.021367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.033337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.033366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.045318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.045344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.057304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.057329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.069291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.069332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.085274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.085302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 Running I/O for 5 seconds... 
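(The repeated failure above is the test re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already in use; the target rejects each attempt with JSON-RPC error -32602, "Invalid parameters". As a minimal sketch of the same call outside the harness — not part of this test run — the request can be replayed with only the Python standard library. The method name, NQN, bdev name, and nsid below are taken verbatim from the log; the socket path /var/tmp/spdk.sock is SPDK's default RPC socket and is an assumption here.)

    # Sketch: replay the nvmf_subsystem_add_ns call seen in the log against a
    # running SPDK target. Assumes the default RPC Unix socket path; the method
    # name and params mirror the log entries above.
    import json
    import socket

    def rpc_call(method, params, sock_path="/var/tmp/spdk.sock"):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:  # read until one complete JSON response has arrived
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue  # response not complete yet; keep reading
        return None

    resp = rpc_call("nvmf_subsystem_add_ns", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    })
    # On a second add of the same NSID the target answers with a JSON-RPC
    # error object, e.g. code -32602 / "Invalid parameters", as logged above.
    print(resp)

(Issued once against a fresh subsystem the call succeeds; issued again with the same nsid it returns the -32602 error that repeats throughout this run.)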
00:12:03.335 [2024-11-06 13:41:44.097264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.097289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.113745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.113777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.129416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.129448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.144964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.144996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.159980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.160010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.174430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.174462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.185994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.186026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.202072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.202103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.217676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.217709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.232974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.233004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.335 [2024-11-06 13:41:44.253159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.335 [2024-11-06 13:41:44.253190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.335 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.268346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.268376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.284611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.284642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.295881] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.296098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.311828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.311998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.327195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.327324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.343321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.343354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.358942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.358973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.370463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.370592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.386340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.386373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.405533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.405666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.419891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.419920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.435070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.435102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.451211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.451241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.462427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.462457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.477767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.477798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.493105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.493137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.508125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.508252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.595 [2024-11-06 13:41:44.523406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.595 [2024-11-06 13:41:44.523564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.595 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.538188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.538311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.552318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.552348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.570426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.570456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.585522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.585664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.596584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.596617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.612247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.612278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.627612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.627644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.642000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.642032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.653918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.653954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.668895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.668924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.684880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:03.855 [2024-11-06 13:41:44.684910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.699114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.699146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.717181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.717212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.732773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.732805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.748380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.748413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.763283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.763313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.855 [2024-11-06 13:41:44.779439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.855 [2024-11-06 13:41:44.779469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.855 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.795667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.795701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.812092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.812123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.828623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.828653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.843989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.844019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.858342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.858373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.869843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.869881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.885333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.885364] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.901145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.901175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.916687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.916718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.932084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.932115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.947479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.947512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.964391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.964424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.980536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.980569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:44.992434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:44.992468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.115 2024/11/06 13:41:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.115 [2024-11-06 13:41:45.007864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.115 [2024-11-06 13:41:45.007940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.116 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.116 [2024-11-06 13:41:45.027375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.116 [2024-11-06 13:41:45.027434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.116 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.116 [2024-11-06 13:41:45.045812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.116 [2024-11-06 13:41:45.045885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.374 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.374 [2024-11-06 13:41:45.064392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.064450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.083634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.083703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 15020.00 IOPS, 117.34 MiB/s [2024-11-06T13:41:45.308Z] [2024-11-06 13:41:45.102347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.102395] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.121194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.121247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.139406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.139462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.158573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.158631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.177188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.177241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.195454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.195497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:04.375 [2024-11-06 13:41:45.210596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.375 [2024-11-06 13:41:45.210645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:04.375 [2024-11-06 13:41:45.230169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.375 [2024-11-06 13:41:45.230224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.375 2024/11/06 13:41:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(the three-line failure above repeats verbatim, timestamps aside, for every further attempt from [2024-11-06 13:41:45.248818] through [2024-11-06 13:41:46.088138], a new attempt every 12-20 ms, elapsed 00:12:04.375 through 00:12:05.415)
00:12:05.415 15017.00 IOPS, 117.32 MiB/s [2024-11-06T13:41:46.348Z]
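The repeated failure is the expected negative path for this test: NSID 1 on nqn.2016-06.io.spdk:cnode1 is already claimed, so every further nvmf_subsystem_add_ns call requesting the same NSID is rejected with JSON-RPC error -32602 (Invalid parameters); the map[...] and %!s(bool=false) rendering of the params comes from the test's Go JSON-RPC client. A minimal sketch of how such a collision can be reproduced against a running SPDK target via scripts/rpc.py, assuming a default RPC socket (the malloc bdev size and block size below are illustrative, not taken from this run):

  # create the backing bdev and the subsystem (names match the log; 64 MiB / 512 B are assumed)
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # the first add succeeds and claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # any repeat with the same NSID fails with Code=-32602, and the target logs
  # "Requested NSID 1 already in use", as seen throughout this section
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

The interleaved throughput samples stay steady while the RPCs fail (15017.00 then 15092.00 IOPS), and 117.32 MiB/s divided by 15017 IOPS is roughly 8192 bytes, so the background workload appears to be issuing 8 KiB I/Os throughout.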
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.104215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.121641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.121673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.136914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.136944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.152654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.152685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.168072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.168102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.182976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.183006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.415 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.415 [2024-11-06 13:41:46.201197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.415 [2024-11-06 13:41:46.201229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.216411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.216442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.231831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.231875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.246219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.246248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.261612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.261642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.277469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.277499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.292353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.292380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.307312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.307341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.323211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.323238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.416 [2024-11-06 13:41:46.337815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.416 [2024-11-06 13:41:46.337843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.416 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.353777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.353805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.368700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.368729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.384704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.384731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.401072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.401100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.416972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.416999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.431416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.431445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.449486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.449516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.464616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.464648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.480192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.480224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.494370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.494404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.509066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:05.676 [2024-11-06 13:41:46.509100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.525385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.525420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.541955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.541986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.557843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.557884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.572899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.572928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.588371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.588400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.676 [2024-11-06 13:41:46.602246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.676 [2024-11-06 13:41:46.602275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.676 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.620480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.620511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.635753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.635784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.656053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.656082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.671322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.671352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.687073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.687106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.705500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.705550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.724141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.724192] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.935 [2024-11-06 13:41:46.742921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.935 [2024-11-06 13:41:46.742969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.935 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.761554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.761605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.779523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.779559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.797883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.797913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.813514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.813545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.829202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.829232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.845563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.845592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-06 13:41:46.862308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-06 13:41:46.862337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.878911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.878938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.894407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.894434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.908522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.908549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.923424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.923451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.935118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.935143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.950186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.950213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.194 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.194 [2024-11-06 13:41:46.970065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.194 [2024-11-06 13:41:46.970092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:46.985725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:46.985754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.001771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.001799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.018136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.018164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.033448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.033475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:06.195 [2024-11-06 13:41:47.047703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.047732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.062959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.062986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.078519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.078544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 15092.00 IOPS, 117.91 MiB/s [2024-11-06T13:41:47.128Z] [2024-11-06 13:41:47.093849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.093899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.195 [2024-11-06 13:41:47.109160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.195 [2024-11-06 13:41:47.109186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.195 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.454 [2024-11-06 13:41:47.127713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.454 [2024-11-06 13:41:47.127741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.454 2024/11/06 13:41:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:06.454 [2024-11-06 13:41:47.142871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.454 [2024-11-06 13:41:47.142897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
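On the wire, each attempt and its rejection reconstruct to roughly the exchange below. The method name and parameters are echoed from the log; the JSON-RPC 2.0 framing, the arbitrary request id, the default /var/tmp/spdk.sock socket path, and the use of a Unix-socket-capable netcat are all assumptions:

  # Hypothetical raw exchange against the RPC Unix socket (requires nc with -U):
  echo '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' \
      | nc -U /var/tmp/spdk.sock
  # Expected reply:
  # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}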
00:12:07.235 15117.75 IOPS, 118.11 MiB/s [2024-11-06T13:41:48.168Z]
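The interleaved IOPS lines are per-second progress updates from the I/O job that keeps running while the RPC loop executes; throughput stays near 15k IOPS throughout, suggesting the failing control-plane calls do not disturb the data path. The MiB/s figure is consistent with the 8192-byte IO size reported in the job summary further down, e.g.:

  # Sanity check: MiB/s = IOPS * 8192-byte IOs / 2^20 (values from the log):
  awk 'BEGIN { printf "%.2f\n", 15117.75 * 8192 / 1048576 }'   # -> 118.11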
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.236 [2024-11-06 13:41:48.151340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.236 [2024-11-06 13:41:48.151371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.236 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.236 [2024-11-06 13:41:48.167459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.236 [2024-11-06 13:41:48.167492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.495 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.495 [2024-11-06 13:41:48.183261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.495 [2024-11-06 13:41:48.183293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.495 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.495 [2024-11-06 13:41:48.198313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.495 [2024-11-06 13:41:48.198345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.212595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.212626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.229125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.229155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.245379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.245417] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.263282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.263313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.277538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.277569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.289453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.289484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.305311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.305342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.324851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.324892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.340603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.340634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.355995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.356026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.370268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.370298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.385169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.385202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.405095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.405137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.496 [2024-11-06 13:41:48.423273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.496 [2024-11-06 13:41:48.423304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.496 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.438092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.438124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.454693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.454725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.470614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.470645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.485964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.485994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.500957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.500987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.514901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.514932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.530357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.530386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.546172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.546216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:07.756 [2024-11-06 13:41:48.561428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.561459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.576931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.576960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.594525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.594557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.609268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.609298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.625072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.625103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.640289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.640422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.657118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.657150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.756 [2024-11-06 13:41:48.672924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.756 [2024-11-06 13:41:48.672955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.756 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.016 [2024-11-06 13:41:48.688696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.016 [2024-11-06 13:41:48.688729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.016 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.016 [2024-11-06 13:41:48.703917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.016 [2024-11-06 13:41:48.703946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.016 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.016 [2024-11-06 13:41:48.719494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.016 [2024-11-06 13:41:48.719631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.016 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.016 [2024-11-06 13:41:48.734417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.016 [2024-11-06 13:41:48.734559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.016 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.016 [2024-11-06 13:41:48.754473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.016 [2024-11-06 13:41:48.754511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.765994] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.766024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.781648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.781778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.797846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.797985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.812097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.812129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.828023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.828055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.844093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.844125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.017 [2024-11-06 13:41:48.859418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.017 [2024-11-06 13:41:48.859452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:08.017 [2024-11-06 13:41:48.874846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.017 [2024-11-06 13:41:48.874891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.017 2024/11/06 13:41:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:08.017 [... the same three-entry error group repeats with fresh timestamps roughly every 15 ms, from 13:41:48.889 through 13:41:49.088, while the workload keeps running ...]
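This block is the nvmf_zcopy test deliberately hammering the target with an invalid RPC: it keeps re-adding bdev malloc0 under NSID 1 while that NSID is already attached, to prove the target rejects each attempt cleanly (JSON-RPC error -32602, Invalid parameters) instead of misbehaving under live I/O. A sketch of a single iteration through SPDK's scripts/rpc.py, assuming the same subsystem and bdev names as in the log:

    # Re-adding a bdev under an NSID that is already taken should fail
    # with "Requested NSID 1 already in use" / Code=-32602, as logged above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1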
00:12:08.277 15146.60 IOPS, 118.33 MiB/s [2024-11-06T13:41:49.210Z]
00:12:08.277 Latency(us)
00:12:08.277 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:12:08.277 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:08.277 Nvme1n1            :       5.01  15150.46  118.36    0.00  0.00  8440.86  4000.59  17897.38
00:12:08.277 ===================================================================================================================
00:12:08.277 Total              :              15150.46  118.36    0.00  0.00  8440.86  4000.59  17897.38
00:12:08.277 [... the nvmf_subsystem_add_ns error group keeps repeating, interleaved with this summary, from 13:41:49.099 through 13:41:49.362 ...]
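A quick cross-check of that summary (my arithmetic, not part of the log): at an I/O size of 8192 B, MiB/s = IOPS x 8192 / 2^20 = IOPS / 128, and 15150.46 / 128 = 118.36 MiB/s, matching the MiB/s column exactly (likewise 15146.60 / 128 = 118.33 for the progress line). Little's law is also consistent: with queue depth 128 and an average latency of 8440.86 us, expected throughput is 128 / 8440.86e-6 s, roughly 15,165 IOPS, right in line with the measured 15,150.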
00:12:08.537 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69345) - No such process
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69345
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:08.537 delay0
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.537 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:12:08.796 [2024-11-06 13:41:49.594089] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:15.372 Initializing NVMe Controllers
00:12:15.372 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:12:15.372 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:15.372 Initialization complete. Launching workers.
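Before the abort statistics that follow: the test has just swapped the namespace's backing device for a delay bdev (delay0) that injects one-second average and p99 latencies on both reads and writes, so the abort example app reliably finds commands still in flight to cancel. The same swap as direct rpc.py calls, mirroring the rpc_cmd lines above (delay latencies are in microseconds):

    # Detach NSID 1, wrap malloc0 in a delay bdev (1,000,000 us = 1 s avg and
    # p99 for reads and writes), and re-expose it as namespace 1.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1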
00:12:15.372 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 209 00:12:15.372 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 496, failed to submit 33 00:12:15.372 success 314, unsuccessful 182, failed 0 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.372 rmmod nvme_tcp 00:12:15.372 rmmod nvme_fabrics 00:12:15.372 rmmod nvme_keyring 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69177 ']' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69177 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 69177 ']' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 69177 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69177 00:12:15.372 killing process with pid 69177 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69177' 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 69177 00:12:15.372 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 69177 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.372 13:41:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.372 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:15.631 00:12:15.631 real 0m25.577s 00:12:15.631 user 0m40.245s 00:12:15.631 sys 0m8.401s 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:15.631 ************************************ 00:12:15.631 END TEST nvmf_zcopy 00:12:15.631 ************************************ 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.631 13:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.631 ************************************ 00:12:15.631 START TEST nvmf_nmic 00:12:15.631 ************************************ 00:12:15.631 13:41:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:15.891 * Looking for test storage... 00:12:15.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:15.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.891 --rc genhtml_branch_coverage=1 00:12:15.891 --rc genhtml_function_coverage=1 00:12:15.891 --rc genhtml_legend=1 00:12:15.891 --rc geninfo_all_blocks=1 00:12:15.891 --rc geninfo_unexecuted_blocks=1 00:12:15.891 00:12:15.891 ' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:15.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.891 --rc genhtml_branch_coverage=1 00:12:15.891 --rc genhtml_function_coverage=1 00:12:15.891 --rc genhtml_legend=1 00:12:15.891 --rc geninfo_all_blocks=1 00:12:15.891 --rc geninfo_unexecuted_blocks=1 00:12:15.891 00:12:15.891 ' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:15.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.891 --rc genhtml_branch_coverage=1 00:12:15.891 --rc genhtml_function_coverage=1 00:12:15.891 --rc genhtml_legend=1 00:12:15.891 --rc geninfo_all_blocks=1 00:12:15.891 --rc geninfo_unexecuted_blocks=1 00:12:15.891 00:12:15.891 ' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:15.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.891 --rc genhtml_branch_coverage=1 00:12:15.891 --rc genhtml_function_coverage=1 00:12:15.891 --rc genhtml_legend=1 00:12:15.891 --rc geninfo_all_blocks=1 00:12:15.891 --rc geninfo_unexecuted_blocks=1 00:12:15.891 00:12:15.891 ' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.891 13:41:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.891 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:15.892 13:41:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:15.892 Cannot 
find device "nvmf_init_br" 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:15.892 Cannot find device "nvmf_init_br2" 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:15.892 Cannot find device "nvmf_tgt_br" 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.892 Cannot find device "nvmf_tgt_br2" 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:15.892 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:16.286 Cannot find device "nvmf_init_br" 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:16.286 Cannot find device "nvmf_init_br2" 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:16.286 Cannot find device "nvmf_tgt_br" 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:16.286 Cannot find device "nvmf_tgt_br2" 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:16.286 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:16.287 Cannot find device "nvmf_br" 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:16.287 Cannot find device "nvmf_init_if" 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:16.287 Cannot find device "nvmf_init_if2" 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.287 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:16.287 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:16.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:12:16.550 00:12:16.550 --- 10.0.0.3 ping statistics --- 00:12:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.550 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:16.550 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:16.550 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:12:16.550 00:12:16.550 --- 10.0.0.4 ping statistics --- 00:12:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.550 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:12:16.550 00:12:16.550 --- 10.0.0.1 ping statistics --- 00:12:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.550 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:16.550 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:16.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:16.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms
00:12:16.550
00:12:16.550 --- 10.0.0.2 ping statistics ---
00:12:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:16.550 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:16.550
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69730
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69730
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 69730 ']'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:16.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable
13:41:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
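The four pings above confirm the harness's virtual topology is up: the initiator-side veths nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) live in the root namespace, their target-side counterparts nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) live inside the nvmf_tgt_ns_spdk network namespace, and every peer end is enslaved to the nvmf_br bridge, with iptables rules accepting TCP/4420 and bridge-internal forwarding. A stripped-down, single-path sketch of the same idea, with hypothetical interface names rather than the harness's exact script (run as root):

    ip netns add tgt_ns
    ip link add init_if type veth peer name init_br    # initiator side
    ip link add tgt_if type veth peer name tgt_br      # target side
    ip link set tgt_if netns tgt_ns
    ip addr add 10.0.0.1/24 dev init_if
    ip netns exec tgt_ns ip addr add 10.0.0.3/24 dev tgt_if
    ip link add br0 type bridge
    ip link set init_br master br0
    ip link set tgt_br master br0
    for l in init_if init_br tgt_br br0; do ip link set "$l" up; done
    ip netns exec tgt_ns ip link set tgt_if up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.3    # initiator -> target, through the bridge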
00:12:16.550 [2024-11-06 13:41:57.403540] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:12:16.550 [2024-11-06 13:41:57.403616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:16.809 [2024-11-06 13:41:57.551904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:16.809 [2024-11-06 13:41:57.627261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:16.809 [2024-11-06 13:41:57.627323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:16.809 [2024-11-06 13:41:57.627335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:16.809 [2024-11-06 13:41:57.627344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:16.809 [2024-11-06 13:41:57.627351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:16.810 [2024-11-06 13:41:57.628721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:16.810 [2024-11-06 13:41:57.628924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:16.810 [2024-11-06 13:41:57.628990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:16.810 [2024-11-06 13:41:57.628995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:17.745 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 ))
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:17.745 [2024-11-06 13:41:58.408795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:17.745 Malloc0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:17.745
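nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF). The core mask 0xF is binary 1111, which is why four reactors come up on cores 0 through 3; -e 0xFFFF is the tracepoint group mask echoed back by app_setup_trace, and -i 0 is the shared-memory instance ID. The rpc_cmd calls above then stand up the data path: a TCP transport, a 64 MB RAM-backed malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1. A rough equivalent with direct rpc.py calls, assuming the default /var/tmp/spdk.sock socket (-u is the transport's I/O unit size; -o is passed through exactly as the harness does):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                         # -a: allow any host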
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:17.745 [2024-11-06 13:41:58.490666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
test case1: single bdev can't be used in multiple subsystems
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:17.746 [2024-11-06 13:41:58.526417] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:12:17.746 [2024-11-06 13:41:58.526464] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:12:17.746 [2024-11-06 13:41:58.526475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:17.746 2024/11/06 13:41:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:17.746 request:
00:12:17.746 {
00:12:17.746 "method": "nvmf_subsystem_add_ns",
00:12:17.746 "params": {
00:12:17.746 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:17.746 "namespace": {
00:12:17.746 "bdev_name": "Malloc0",
00:12:17.746 "no_auto_visible": false
00:12:17.746 }
00:12:17.746 }
00:12:17.746 }
00:12:17.746 Got JSON-RPC error response
00:12:17.746 GoRPCClient: error on JSON-RPC call
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
 Adding namespace failed - expected result.
13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
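test case1 leans on SPDK's bdev claim semantics: when Malloc0 was added to cnode1, the NVMe-oF target took an exclusive_write claim on it, so bdev_open fails (error=-1) when cnode2 tries to attach the same bdev, and the RPC surfaces that as -32602. A hedged sketch of the expected-failure check the nmic.sh script is effectively performing:

    # Expected to fail: cnode1 already holds an exclusive_write claim on Malloc0.
    if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: bdev was shared across subsystems"
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'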
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:17.746 request: 00:12:17.746 { 00:12:17.746 "method": "nvmf_subsystem_add_ns", 00:12:17.746 "params": { 00:12:17.746 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:17.746 "namespace": { 00:12:17.746 "bdev_name": "Malloc0", 00:12:17.746 "no_auto_visible": false 00:12:17.746 } 00:12:17.746 } 00:12:17.746 } 00:12:17.746 Got JSON-RPC error response 00:12:17.746 GoRPCClient: error on JSON-RPC call 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:17.746 Adding namespace failed - expected result. 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:17.746 test case2: host connect to nvmf target in multiple paths 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:17.746 [2024-11-06 13:41:58.546509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.746 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:18.005 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:20.540 13:42:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:20.540 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:20.540 [global] 00:12:20.540 thread=1 00:12:20.540 invalidate=1 00:12:20.540 rw=write 00:12:20.540 time_based=1 00:12:20.540 runtime=1 00:12:20.540 ioengine=libaio 00:12:20.540 direct=1 00:12:20.540 bs=4096 00:12:20.540 iodepth=1 00:12:20.540 norandommap=0 00:12:20.540 numjobs=1 00:12:20.540 00:12:20.540 verify_dump=1 00:12:20.540 verify_backlog=512 00:12:20.540 verify_state_save=0 00:12:20.540 do_verify=1 00:12:20.540 verify=crc32c-intel 00:12:20.540 [job0] 00:12:20.540 filename=/dev/nvme0n1 00:12:20.540 Could not set queue depth (nvme0n1) 00:12:20.540 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.540 fio-3.35 00:12:20.540 Starting 1 thread 00:12:21.516 00:12:21.516 job0: (groupid=0, jobs=1): err= 0: pid=69836: Wed Nov 6 13:42:02 2024 00:12:21.516 read: IOPS=4819, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec) 00:12:21.516 slat (nsec): min=8206, max=25954, avg=8922.49, stdev=1467.95 00:12:21.516 clat (usec): min=88, max=320, avg=102.80, stdev= 9.28 00:12:21.516 lat (usec): min=96, max=328, avg=111.72, stdev= 9.49 00:12:21.516 clat percentiles (usec): 00:12:21.516 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 98], 00:12:21.516 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 103], 00:12:21.516 | 70.00th=[ 105], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 114], 00:12:21.516 | 99.00th=[ 122], 99.50th=[ 128], 99.90th=[ 249], 99.95th=[ 293], 00:12:21.516 | 99.99th=[ 322] 00:12:21.516 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:21.516 slat (usec): min=8, max=165, avg=13.88, stdev= 5.58 00:12:21.516 clat (usec): min=63, max=798, avg=74.39, stdev=16.01 00:12:21.516 lat (usec): min=75, max=814, avg=88.26, stdev=17.92 00:12:21.516 clat percentiles (usec): 00:12:21.516 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:12:21.516 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 75], 00:12:21.516 | 70.00th=[ 76], 80.00th=[ 78], 90.00th=[ 82], 95.00th=[ 85], 00:12:21.516 | 99.00th=[ 94], 99.50th=[ 110], 99.90th=[ 237], 99.95th=[ 453], 00:12:21.516 | 99.99th=[ 799] 00:12:21.516 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:12:21.516 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:21.516 lat (usec) : 100=68.69%, 250=31.22%, 500=0.06%, 750=0.01%, 1000=0.01% 00:12:21.516 cpu : usr=2.30%, sys=8.90%, ctx=9944, majf=0, minf=5 00:12:21.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.516 issued rwts: total=4824,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.516 00:12:21.516 Run status group 0 (all jobs): 00:12:21.516 READ: bw=18.8MiB/s (19.7MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=18.8MiB (19.8MB), run=1001-1001msec 00:12:21.516 WRITE: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (21.0MB), 
run=1001-1001msec 00:12:21.516 00:12:21.516 Disk stats (read/write): 00:12:21.516 nvme0n1: ios=4360/4608, merge=0/0, ticks=456/371, in_queue=827, util=91.27% 00:12:21.516 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.776 rmmod nvme_tcp 00:12:21.776 rmmod nvme_fabrics 00:12:21.776 rmmod nvme_keyring 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69730 ']' 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69730 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 69730 ']' 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 69730 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69730 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:21.776 killing process with pid 69730 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 69730' 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 69730 00:12:21.776 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 69730 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:22.035 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:22.294 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.294 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:22.553 00:12:22.553 real 0m6.814s 00:12:22.553 user 0m20.800s 00:12:22.553 sys 0m2.100s 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:22.553 13:42:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:22.553 ************************************ 00:12:22.553 END TEST nvmf_nmic 00:12:22.553 ************************************ 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:22.553 ************************************ 00:12:22.553 START TEST nvmf_fio_target 00:12:22.553 ************************************ 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:22.553 * Looking for test storage... 00:12:22.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:22.553 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.812 --rc genhtml_branch_coverage=1 00:12:22.812 --rc genhtml_function_coverage=1 00:12:22.812 --rc genhtml_legend=1 00:12:22.812 --rc geninfo_all_blocks=1 00:12:22.812 --rc geninfo_unexecuted_blocks=1 00:12:22.812 00:12:22.812 ' 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.812 --rc genhtml_branch_coverage=1 00:12:22.812 --rc genhtml_function_coverage=1 00:12:22.812 --rc genhtml_legend=1 00:12:22.812 --rc geninfo_all_blocks=1 00:12:22.812 --rc geninfo_unexecuted_blocks=1 00:12:22.812 00:12:22.812 ' 00:12:22.812 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.812 --rc genhtml_branch_coverage=1 00:12:22.812 --rc genhtml_function_coverage=1 00:12:22.813 --rc genhtml_legend=1 00:12:22.813 --rc geninfo_all_blocks=1 00:12:22.813 --rc geninfo_unexecuted_blocks=1 00:12:22.813 00:12:22.813 ' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.813 --rc genhtml_branch_coverage=1 00:12:22.813 --rc genhtml_function_coverage=1 00:12:22.813 --rc genhtml_legend=1 00:12:22.813 --rc geninfo_all_blocks=1 00:12:22.813 --rc geninfo_unexecuted_blocks=1 00:12:22.813 00:12:22.813 ' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:22.813 
13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.813 13:42:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:22.813 Cannot find device "nvmf_init_br" 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:22.813 Cannot find device "nvmf_init_br2" 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:22.813 Cannot find device "nvmf_tgt_br" 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:22.813 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.813 Cannot find device "nvmf_tgt_br2" 00:12:22.814 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:22.814 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:22.814 Cannot find device "nvmf_init_br" 00:12:22.814 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:22.814 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:23.072 Cannot find device "nvmf_init_br2" 00:12:23.072 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:23.073 Cannot find device "nvmf_tgt_br" 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:23.073 Cannot find device "nvmf_tgt_br2" 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:23.073 Cannot find device "nvmf_br" 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:23.073 Cannot find device "nvmf_init_if" 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:23.073 Cannot find device "nvmf_init_if2" 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:23.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:23.073 
13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:23.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:23.073 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:23.331 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:23.331 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.331 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.331 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:23.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:23.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:23.332 00:12:23.332 --- 10.0.0.3 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:23.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:23.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:12:23.332 00:12:23.332 --- 10.0.0.4 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:23.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:12:23.332 00:12:23.332 --- 10.0.0.1 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:23.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:23.332 00:12:23.332 --- 10.0.0.2 ping statistics --- 00:12:23.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.332 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70078 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70078 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 70078 ']' 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:23.332 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.332 [2024-11-06 13:42:04.222949] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
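(Reader aid, not part of the captured run: the nvmf_veth_init trace above builds the virtual test network that the rest of this log depends on. The sketch below distills those same ip/iptables commands into one place; every name and address is taken from the trace itself, and the second interface pair — nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4 — is omitted for brevity.)

  # Namespace for the target; the initiator side stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  # Two veth pairs: one for the initiator, one for the target.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  # Move the target end into the namespace and address both ends.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # Bring everything up.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # A bridge joins the root-namespace ends of both pairs, giving the
  # initiator (10.0.0.1) a path to the namespaced target (10.0.0.3).
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Allow NVMe/TCP traffic to the listener port used throughout this log.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # The target then runs inside the namespace, exactly as nvmfappstart
  # does in the next record:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

(The pings at 13:42:04 above verify this topology in both directions before the target starts; the log now continues with the namespaced nvmf_tgt's DPDK initialization.)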
00:12:23.332 [2024-11-06 13:42:04.223044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.591 [2024-11-06 13:42:04.378684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.591 [2024-11-06 13:42:04.456065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.591 [2024-11-06 13:42:04.456122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.591 [2024-11-06 13:42:04.456132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.591 [2024-11-06 13:42:04.456141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.591 [2024-11-06 13:42:04.456148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.591 [2024-11-06 13:42:04.457474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.591 [2024-11-06 13:42:04.457690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.591 [2024-11-06 13:42:04.457786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.591 [2024-11-06 13:42:04.457850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.528 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:24.528 [2024-11-06 13:42:05.427301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.786 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.044 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:25.044 13:42:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.303 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:25.303 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.561 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:25.561 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.820 13:42:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:25.820 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:26.079 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.337 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:26.337 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.597 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:26.597 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.855 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:26.855 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:27.113 13:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.372 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:27.372 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.633 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:27.633 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.633 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:27.903 [2024-11-06 13:42:08.769715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:27.903 13:42:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:28.161 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:28.419 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:28.678 13:42:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:30.581 13:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:30.581 [global] 00:12:30.581 thread=1 00:12:30.581 invalidate=1 00:12:30.581 rw=write 00:12:30.581 time_based=1 00:12:30.581 runtime=1 00:12:30.581 ioengine=libaio 00:12:30.581 direct=1 00:12:30.581 bs=4096 00:12:30.581 iodepth=1 00:12:30.581 norandommap=0 00:12:30.581 numjobs=1 00:12:30.581 00:12:30.581 verify_dump=1 00:12:30.581 verify_backlog=512 00:12:30.581 verify_state_save=0 00:12:30.581 do_verify=1 00:12:30.581 verify=crc32c-intel 00:12:30.581 [job0] 00:12:30.581 filename=/dev/nvme0n1 00:12:30.581 [job1] 00:12:30.581 filename=/dev/nvme0n2 00:12:30.869 [job2] 00:12:30.869 filename=/dev/nvme0n3 00:12:30.869 [job3] 00:12:30.869 filename=/dev/nvme0n4 00:12:30.869 Could not set queue depth (nvme0n1) 00:12:30.869 Could not set queue depth (nvme0n2) 00:12:30.869 Could not set queue depth (nvme0n3) 00:12:30.869 Could not set queue depth (nvme0n4) 00:12:30.869 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.869 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.869 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.869 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.869 fio-3.35 00:12:30.869 Starting 4 threads 00:12:32.247 00:12:32.247 job0: (groupid=0, jobs=1): err= 0: pid=70370: Wed Nov 6 13:42:12 2024 00:12:32.247 read: IOPS=2375, BW=9502KiB/s (9731kB/s)(9512KiB/1001msec) 00:12:32.247 slat (nsec): min=8246, max=45022, avg=10037.54, stdev=2899.97 00:12:32.247 clat (usec): min=125, max=2484, avg=217.80, stdev=64.31 00:12:32.247 lat (usec): min=134, max=2506, avg=227.84, stdev=64.72 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 178], 00:12:32.247 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 229], 00:12:32.247 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:12:32.247 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 1057], 00:12:32.247 | 99.99th=[ 2474] 00:12:32.247 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:32.247 slat 
(usec): min=12, max=153, avg=15.83, stdev= 7.06 00:12:32.247 clat (usec): min=92, max=1954, avg=161.07, stdev=50.03 00:12:32.247 lat (usec): min=106, max=1967, avg=176.90, stdev=51.44 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 103], 5.00th=[ 113], 10.00th=[ 120], 20.00th=[ 130], 00:12:32.247 | 30.00th=[ 139], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 165], 00:12:32.247 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 223], 00:12:32.247 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 396], 99.95th=[ 449], 00:12:32.247 | 99.99th=[ 1958] 00:12:32.247 bw ( KiB/s): min=11192, max=11192, per=30.39%, avg=11192.00, stdev= 0.00, samples=1 00:12:32.247 iops : min= 2798, max= 2798, avg=2798.00, stdev= 0.00, samples=1 00:12:32.247 lat (usec) : 100=0.24%, 250=88.84%, 500=10.85% 00:12:32.247 lat (msec) : 2=0.04%, 4=0.02% 00:12:32.247 cpu : usr=0.90%, sys=5.40%, ctx=4939, majf=0, minf=13 00:12:32.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.247 issued rwts: total=2378,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.247 job1: (groupid=0, jobs=1): err= 0: pid=70371: Wed Nov 6 13:42:12 2024 00:12:32.247 read: IOPS=2345, BW=9380KiB/s (9605kB/s)(9380KiB/1000msec) 00:12:32.247 slat (nsec): min=8052, max=35563, avg=9745.09, stdev=2435.41 00:12:32.247 clat (usec): min=127, max=706, avg=218.95, stdev=41.49 00:12:32.247 lat (usec): min=137, max=721, avg=228.69, stdev=41.64 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 182], 00:12:32.247 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 229], 00:12:32.247 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 285], 00:12:32.247 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 416], 99.95th=[ 603], 00:12:32.247 | 99.99th=[ 709] 00:12:32.247 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:12:32.247 slat (usec): min=12, max=133, avg=16.54, stdev= 8.60 00:12:32.247 clat (usec): min=94, max=519, avg=162.53, stdev=32.73 00:12:32.247 lat (usec): min=107, max=532, avg=179.07, stdev=34.45 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 109], 5.00th=[ 118], 10.00th=[ 125], 20.00th=[ 135], 00:12:32.247 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 167], 00:12:32.247 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 219], 00:12:32.247 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 306], 99.95th=[ 355], 00:12:32.247 | 99.99th=[ 519] 00:12:32.247 bw ( KiB/s): min=11176, max=11176, per=30.35%, avg=11176.00, stdev= 0.00, samples=1 00:12:32.247 iops : min= 2794, max= 2794, avg=2794.00, stdev= 0.00, samples=1 00:12:32.247 lat (usec) : 100=0.02%, 250=89.03%, 500=10.89%, 750=0.06% 00:12:32.247 cpu : usr=1.20%, sys=5.20%, ctx=4905, majf=0, minf=8 00:12:32.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.247 issued rwts: total=2345,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.247 job2: (groupid=0, jobs=1): err= 0: pid=70372: Wed Nov 6 13:42:12 2024 00:12:32.247 read: 
IOPS=1472, BW=5890KiB/s (6031kB/s)(5896KiB/1001msec) 00:12:32.247 slat (usec): min=16, max=117, avg=31.03, stdev= 7.81 00:12:32.247 clat (usec): min=216, max=455, avg=317.32, stdev=39.10 00:12:32.247 lat (usec): min=244, max=520, avg=348.35, stdev=41.03 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 243], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 281], 00:12:32.247 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 330], 00:12:32.247 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 383], 00:12:32.247 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 457], 99.95th=[ 457], 00:12:32.247 | 99.99th=[ 457] 00:12:32.247 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:32.247 slat (usec): min=21, max=278, avg=45.29, stdev=12.70 00:12:32.247 clat (usec): min=42, max=449, avg=265.33, stdev=39.08 00:12:32.247 lat (usec): min=199, max=505, avg=310.62, stdev=43.60 00:12:32.247 clat percentiles (usec): 00:12:32.247 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 229], 00:12:32.247 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:12:32.247 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:12:32.247 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 449], 00:12:32.247 | 99.99th=[ 449] 00:12:32.247 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:12:32.248 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:32.248 lat (usec) : 50=0.03%, 250=19.44%, 500=80.53% 00:12:32.248 cpu : usr=2.50%, sys=9.00%, ctx=3010, majf=0, minf=16 00:12:32.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.248 issued rwts: total=1474,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.248 job3: (groupid=0, jobs=1): err= 0: pid=70373: Wed Nov 6 13:42:12 2024 00:12:32.248 read: IOPS=2260, BW=9043KiB/s (9260kB/s)(9052KiB/1001msec) 00:12:32.248 slat (usec): min=8, max=113, avg=10.67, stdev= 4.95 00:12:32.248 clat (usec): min=132, max=524, avg=219.59, stdev=38.29 00:12:32.248 lat (usec): min=141, max=533, avg=230.26, stdev=38.71 00:12:32.248 clat percentiles (usec): 00:12:32.248 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 184], 00:12:32.248 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 221], 60.00th=[ 229], 00:12:32.248 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:12:32.248 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 388], 99.95th=[ 506], 00:12:32.248 | 99.99th=[ 529] 00:12:32.248 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:32.248 slat (nsec): min=11842, max=79030, avg=15858.13, stdev=5682.39 00:12:32.248 clat (usec): min=87, max=6116, avg=169.16, stdev=169.21 00:12:32.248 lat (usec): min=100, max=6129, avg=185.02, stdev=169.50 00:12:32.248 clat percentiles (usec): 00:12:32.248 | 1.00th=[ 104], 5.00th=[ 119], 10.00th=[ 127], 20.00th=[ 137], 00:12:32.248 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 167], 00:12:32.248 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 217], 00:12:32.248 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 3720], 99.95th=[ 3982], 00:12:32.248 | 99.99th=[ 6128] 00:12:32.248 bw ( KiB/s): min=10624, max=10624, per=28.85%, avg=10624.00, stdev= 0.00, samples=1 00:12:32.248 iops : 
min= 2656, max= 2656, avg=2656.00, stdev= 0.00, samples=1 00:12:32.248 lat (usec) : 100=0.27%, 250=89.34%, 500=10.22%, 750=0.04% 00:12:32.248 lat (msec) : 2=0.04%, 4=0.06%, 10=0.02% 00:12:32.248 cpu : usr=0.90%, sys=5.50%, ctx=4824, majf=0, minf=13 00:12:32.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.248 issued rwts: total=2263,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.248 00:12:32.248 Run status group 0 (all jobs): 00:12:32.248 READ: bw=33.0MiB/s (34.6MB/s), 5890KiB/s-9502KiB/s (6031kB/s-9731kB/s), io=33.0MiB (34.7MB), run=1000-1001msec 00:12:32.248 WRITE: bw=36.0MiB/s (37.7MB/s), 6138KiB/s-10.0MiB/s (6285kB/s-10.5MB/s), io=36.0MiB (37.7MB), run=1000-1001msec 00:12:32.248 00:12:32.248 Disk stats (read/write): 00:12:32.248 nvme0n1: ios=2098/2195, merge=0/0, ticks=517/381, in_queue=898, util=93.18% 00:12:32.248 nvme0n2: ios=2097/2160, merge=0/0, ticks=520/372, in_queue=892, util=93.72% 00:12:32.248 nvme0n3: ios=1080/1536, merge=0/0, ticks=342/433, in_queue=775, util=89.36% 00:12:32.248 nvme0n4: ios=2054/2051, merge=0/0, ticks=462/349, in_queue=811, util=88.89% 00:12:32.248 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:32.248 [global] 00:12:32.248 thread=1 00:12:32.248 invalidate=1 00:12:32.248 rw=randwrite 00:12:32.248 time_based=1 00:12:32.248 runtime=1 00:12:32.248 ioengine=libaio 00:12:32.248 direct=1 00:12:32.248 bs=4096 00:12:32.248 iodepth=1 00:12:32.248 norandommap=0 00:12:32.248 numjobs=1 00:12:32.248 00:12:32.248 verify_dump=1 00:12:32.248 verify_backlog=512 00:12:32.248 verify_state_save=0 00:12:32.248 do_verify=1 00:12:32.248 verify=crc32c-intel 00:12:32.248 [job0] 00:12:32.248 filename=/dev/nvme0n1 00:12:32.248 [job1] 00:12:32.248 filename=/dev/nvme0n2 00:12:32.248 [job2] 00:12:32.248 filename=/dev/nvme0n3 00:12:32.248 [job3] 00:12:32.248 filename=/dev/nvme0n4 00:12:32.248 Could not set queue depth (nvme0n1) 00:12:32.248 Could not set queue depth (nvme0n2) 00:12:32.248 Could not set queue depth (nvme0n3) 00:12:32.248 Could not set queue depth (nvme0n4) 00:12:32.248 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.248 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.248 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.248 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.248 fio-3.35 00:12:32.248 Starting 4 threads 00:12:33.625 00:12:33.625 job0: (groupid=0, jobs=1): err= 0: pid=70432: Wed Nov 6 13:42:14 2024 00:12:33.625 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:12:33.625 slat (nsec): min=8057, max=71026, avg=9531.56, stdev=3010.29 00:12:33.625 clat (usec): min=117, max=1862, avg=174.96, stdev=48.11 00:12:33.625 lat (usec): min=125, max=1871, avg=184.50, stdev=48.30 00:12:33.625 clat percentiles (usec): 00:12:33.625 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:12:33.625 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:12:33.625 | 70.00th=[ 
186], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:12:33.625 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 685], 99.95th=[ 1418], 00:12:33.625 | 99.99th=[ 1860] 00:12:33.625 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:33.625 slat (usec): min=11, max=135, avg=15.60, stdev= 8.13 00:12:33.625 clat (usec): min=81, max=1827, avg=136.16, stdev=41.15 00:12:33.625 lat (usec): min=92, max=1840, avg=151.76, stdev=42.91 00:12:33.625 clat percentiles (usec): 00:12:33.625 | 1.00th=[ 93], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 113], 00:12:33.625 | 30.00th=[ 118], 40.00th=[ 125], 50.00th=[ 133], 60.00th=[ 139], 00:12:33.625 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 188], 00:12:33.625 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 273], 99.95th=[ 330], 00:12:33.625 | 99.99th=[ 1827] 00:12:33.625 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:12:33.625 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:33.625 lat (usec) : 100=2.29%, 250=97.15%, 500=0.49%, 750=0.02% 00:12:33.625 lat (msec) : 2=0.05% 00:12:33.625 cpu : usr=1.70%, sys=5.60%, ctx=5936, majf=0, minf=13 00:12:33.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.625 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.625 job1: (groupid=0, jobs=1): err= 0: pid=70433: Wed Nov 6 13:42:14 2024 00:12:33.625 read: IOPS=1172, BW=4691KiB/s (4804kB/s)(4696KiB/1001msec) 00:12:33.625 slat (usec): min=21, max=654, avg=33.19, stdev=19.36 00:12:33.625 clat (usec): min=131, max=993, avg=416.41, stdev=81.47 00:12:33.625 lat (usec): min=177, max=1030, avg=449.60, stdev=81.89 00:12:33.625 clat percentiles (usec): 00:12:33.625 | 1.00th=[ 206], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 351], 00:12:33.625 | 30.00th=[ 392], 40.00th=[ 412], 50.00th=[ 429], 60.00th=[ 445], 00:12:33.625 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 529], 00:12:33.625 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 660], 99.95th=[ 996], 00:12:33.625 | 99.99th=[ 996] 00:12:33.625 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:33.625 slat (usec): min=21, max=116, avg=44.98, stdev=10.75 00:12:33.625 clat (usec): min=108, max=1500, avg=256.68, stdev=75.87 00:12:33.625 lat (usec): min=145, max=1530, avg=301.67, stdev=76.41 00:12:33.625 clat percentiles (usec): 00:12:33.626 | 1.00th=[ 147], 5.00th=[ 184], 10.00th=[ 202], 20.00th=[ 221], 00:12:33.626 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 262], 00:12:33.626 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 330], 00:12:33.626 | 99.00th=[ 371], 99.50th=[ 429], 99.90th=[ 1450], 99.95th=[ 1500], 00:12:33.626 | 99.99th=[ 1500] 00:12:33.626 bw ( KiB/s): min= 6912, max= 6912, per=21.11%, avg=6912.00, stdev= 0.00, samples=1 00:12:33.626 iops : min= 1728, max= 1728, avg=1728.00, stdev= 0.00, samples=1 00:12:33.626 lat (usec) : 250=28.56%, 500=66.38%, 750=4.83%, 1000=0.04% 00:12:33.626 lat (msec) : 2=0.18% 00:12:33.626 cpu : usr=2.60%, sys=8.10%, ctx=2712, majf=0, minf=15 00:12:33.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 issued rwts: total=1174,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.626 job2: (groupid=0, jobs=1): err= 0: pid=70434: Wed Nov 6 13:42:14 2024 00:12:33.626 read: IOPS=1254, BW=5019KiB/s (5139kB/s)(5024KiB/1001msec) 00:12:33.626 slat (nsec): min=26558, max=97440, avg=32807.60, stdev=5673.27 00:12:33.626 clat (usec): min=266, max=1953, avg=350.67, stdev=67.59 00:12:33.626 lat (usec): min=299, max=1984, avg=383.48, stdev=68.02 00:12:33.626 clat percentiles (usec): 00:12:33.626 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 326], 00:12:33.626 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:12:33.626 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 400], 00:12:33.626 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 1369], 99.95th=[ 1958], 00:12:33.626 | 99.99th=[ 1958] 00:12:33.626 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:33.626 slat (usec): min=41, max=146, avg=47.83, stdev= 7.87 00:12:33.626 clat (usec): min=203, max=1466, avg=283.68, stdev=40.36 00:12:33.626 lat (usec): min=248, max=1513, avg=331.51, stdev=41.01 00:12:33.626 clat percentiles (usec): 00:12:33.626 | 1.00th=[ 227], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 262], 00:12:33.626 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:12:33.626 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:12:33.626 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 660], 99.95th=[ 1467], 00:12:33.626 | 99.99th=[ 1467] 00:12:33.626 bw ( KiB/s): min= 7184, max= 7184, per=21.95%, avg=7184.00, stdev= 0.00, samples=1 00:12:33.626 iops : min= 1796, max= 1796, avg=1796.00, stdev= 0.00, samples=1 00:12:33.626 lat (usec) : 250=5.37%, 500=94.41%, 750=0.07% 00:12:33.626 lat (msec) : 2=0.14% 00:12:33.626 cpu : usr=2.00%, sys=9.40%, ctx=2792, majf=0, minf=13 00:12:33.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 issued rwts: total=1256,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.626 job3: (groupid=0, jobs=1): err= 0: pid=70435: Wed Nov 6 13:42:14 2024 00:12:33.626 read: IOPS=1572, BW=6290KiB/s (6441kB/s)(6296KiB/1001msec) 00:12:33.626 slat (nsec): min=8142, max=57493, avg=12449.56, stdev=6861.30 00:12:33.626 clat (usec): min=119, max=2261, avg=279.78, stdev=82.36 00:12:33.626 lat (usec): min=128, max=2269, avg=292.23, stdev=84.29 00:12:33.626 clat percentiles (usec): 00:12:33.626 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 198], 20.00th=[ 235], 00:12:33.626 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:12:33.626 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 363], 00:12:33.626 | 99.00th=[ 437], 99.50th=[ 510], 99.90th=[ 1467], 99.95th=[ 2278], 00:12:33.626 | 99.99th=[ 2278] 00:12:33.626 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:33.626 slat (usec): min=11, max=102, avg=34.83, stdev=12.03 00:12:33.626 clat (usec): min=107, max=3643, avg=225.76, stdev=97.65 00:12:33.626 lat (usec): min=121, max=3672, avg=260.59, stdev=99.60 00:12:33.626 clat percentiles (usec): 00:12:33.626 | 1.00th=[ 116], 5.00th=[ 128], 10.00th=[ 139], 20.00th=[ 167], 00:12:33.626 | 30.00th=[ 
194], 40.00th=[ 215], 50.00th=[ 229], 60.00th=[ 243], 00:12:33.626 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:12:33.626 | 99.00th=[ 363], 99.50th=[ 400], 99.90th=[ 510], 99.95th=[ 1139], 00:12:33.626 | 99.99th=[ 3654] 00:12:33.626 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:12:33.626 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:33.626 lat (usec) : 250=49.61%, 500=50.06%, 750=0.19% 00:12:33.626 lat (msec) : 2=0.08%, 4=0.06% 00:12:33.626 cpu : usr=1.30%, sys=7.40%, ctx=3622, majf=0, minf=5 00:12:33.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.626 issued rwts: total=1574,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.626 00:12:33.626 Run status group 0 (all jobs): 00:12:33.626 READ: bw=26.8MiB/s (28.1MB/s), 4691KiB/s-11.2MiB/s (4804kB/s-11.7MB/s), io=26.8MiB (28.1MB), run=1001-1001msec 00:12:33.626 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:12:33.626 00:12:33.626 Disk stats (read/write): 00:12:33.626 nvme0n1: ios=2582/2560, merge=0/0, ticks=499/362, in_queue=861, util=88.68% 00:12:33.626 nvme0n2: ios=1073/1296, merge=0/0, ticks=461/344, in_queue=805, util=88.70% 00:12:33.626 nvme0n3: ios=1063/1383, merge=0/0, ticks=396/400, in_queue=796, util=90.39% 00:12:33.626 nvme0n4: ios=1571/1559, merge=0/0, ticks=470/367, in_queue=837, util=90.97% 00:12:33.626 13:42:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:33.626 [global] 00:12:33.626 thread=1 00:12:33.626 invalidate=1 00:12:33.626 rw=write 00:12:33.626 time_based=1 00:12:33.626 runtime=1 00:12:33.626 ioengine=libaio 00:12:33.626 direct=1 00:12:33.626 bs=4096 00:12:33.626 iodepth=128 00:12:33.626 norandommap=0 00:12:33.626 numjobs=1 00:12:33.626 00:12:33.626 verify_dump=1 00:12:33.626 verify_backlog=512 00:12:33.626 verify_state_save=0 00:12:33.626 do_verify=1 00:12:33.626 verify=crc32c-intel 00:12:33.626 [job0] 00:12:33.626 filename=/dev/nvme0n1 00:12:33.626 [job1] 00:12:33.626 filename=/dev/nvme0n2 00:12:33.626 [job2] 00:12:33.626 filename=/dev/nvme0n3 00:12:33.626 [job3] 00:12:33.626 filename=/dev/nvme0n4 00:12:33.626 Could not set queue depth (nvme0n1) 00:12:33.626 Could not set queue depth (nvme0n2) 00:12:33.626 Could not set queue depth (nvme0n3) 00:12:33.626 Could not set queue depth (nvme0n4) 00:12:33.885 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.885 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.885 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.885 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.885 fio-3.35 00:12:33.885 Starting 4 threads 00:12:35.299 00:12:35.299 job0: (groupid=0, jobs=1): err= 0: pid=70495: Wed Nov 6 13:42:15 2024 00:12:35.299 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:12:35.299 slat (usec): min=7, max=10783, avg=240.95, stdev=1200.62 00:12:35.299 clat (usec): 
min=19647, max=46953, avg=30891.09, stdev=4141.64 00:12:35.299 lat (usec): min=20554, max=46962, avg=31132.04, stdev=4037.30 00:12:35.299 clat percentiles (usec): 00:12:35.299 | 1.00th=[22414], 5.00th=[25035], 10.00th=[26084], 20.00th=[27919], 00:12:35.299 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30278], 60.00th=[30802], 00:12:35.299 | 70.00th=[31851], 80.00th=[33424], 90.00th=[36439], 95.00th=[39060], 00:12:35.299 | 99.00th=[43254], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:12:35.299 | 99.99th=[46924] 00:12:35.299 write: IOPS=2413, BW=9655KiB/s (9887kB/s)(9684KiB/1003msec); 0 zone resets 00:12:35.299 slat (usec): min=12, max=9361, avg=198.86, stdev=956.62 00:12:35.299 clat (usec): min=1509, max=40143, avg=25902.15, stdev=6163.17 00:12:35.299 lat (usec): min=5185, max=40182, avg=26101.00, stdev=6115.47 00:12:35.299 clat percentiles (usec): 00:12:35.299 | 1.00th=[ 5604], 5.00th=[17433], 10.00th=[18220], 20.00th=[20317], 00:12:35.299 | 30.00th=[21365], 40.00th=[23725], 50.00th=[27395], 60.00th=[28967], 00:12:35.299 | 70.00th=[30016], 80.00th=[31327], 90.00th=[32900], 95.00th=[34866], 00:12:35.299 | 99.00th=[36439], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:12:35.299 | 99.99th=[40109] 00:12:35.299 bw ( KiB/s): min= 8208, max=10152, per=20.37%, avg=9180.00, stdev=1374.62, samples=2 00:12:35.299 iops : min= 2052, max= 2538, avg=2295.00, stdev=343.65, samples=2 00:12:35.299 lat (msec) : 2=0.02%, 10=0.72%, 20=8.23%, 50=91.03% 00:12:35.299 cpu : usr=2.30%, sys=9.18%, ctx=153, majf=0, minf=4 00:12:35.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:35.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.299 issued rwts: total=2048,2421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.299 job1: (groupid=0, jobs=1): err= 0: pid=70496: Wed Nov 6 13:42:15 2024 00:12:35.299 read: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.86MiB/1004msec) 00:12:35.299 slat (usec): min=18, max=6494, avg=201.79, stdev=858.19 00:12:35.300 clat (usec): min=2318, max=32562, avg=25252.03, stdev=3548.99 00:12:35.300 lat (usec): min=4397, max=32617, avg=25453.83, stdev=3513.69 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[10159], 5.00th=[21103], 10.00th=[22414], 20.00th=[23725], 00:12:35.300 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25822], 60.00th=[26608], 00:12:35.300 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[29492], 00:12:35.300 | 99.00th=[31065], 99.50th=[31851], 99.90th=[32637], 99.95th=[32637], 00:12:35.300 | 99.99th=[32637] 00:12:35.300 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:12:35.300 slat (usec): min=12, max=5929, avg=180.02, stdev=646.03 00:12:35.300 clat (usec): min=17491, max=30190, avg=24513.48, stdev=2042.83 00:12:35.300 lat (usec): min=17577, max=30229, avg=24693.50, stdev=1967.95 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[18220], 5.00th=[22152], 10.00th=[22414], 20.00th=[22938], 00:12:35.300 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[25035], 00:12:35.300 | 70.00th=[25822], 80.00th=[26346], 90.00th=[26870], 95.00th=[27919], 00:12:35.300 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:12:35.300 | 99.99th=[30278] 00:12:35.300 bw ( KiB/s): min= 8960, max=11520, per=22.73%, avg=10240.00, stdev=1810.19, samples=2 00:12:35.300 iops : min= 2240, 
max= 2880, avg=2560.00, stdev=452.55, samples=2 00:12:35.300 lat (msec) : 4=0.02%, 10=0.41%, 20=2.54%, 50=97.03% 00:12:35.300 cpu : usr=3.19%, sys=10.37%, ctx=352, majf=0, minf=1 00:12:35.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:35.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.300 issued rwts: total=2524,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.300 job2: (groupid=0, jobs=1): err= 0: pid=70497: Wed Nov 6 13:42:15 2024 00:12:35.300 read: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec) 00:12:35.300 slat (usec): min=18, max=5305, avg=121.25, stdev=560.80 00:12:35.300 clat (usec): min=340, max=23130, avg=15685.95, stdev=2200.76 00:12:35.300 lat (usec): min=4528, max=23204, avg=15807.19, stdev=2188.23 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[ 5604], 5.00th=[12518], 10.00th=[13042], 20.00th=[14091], 00:12:35.300 | 30.00th=[14877], 40.00th=[15401], 50.00th=[16057], 60.00th=[16188], 00:12:35.300 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18220], 95.00th=[18744], 00:12:35.300 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21627], 99.95th=[21890], 00:12:35.300 | 99.99th=[23200] 00:12:35.300 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:12:35.300 slat (usec): min=23, max=4708, avg=112.48, stdev=431.35 00:12:35.300 clat (usec): min=10939, max=21002, avg=15444.62, stdev=1648.78 00:12:35.300 lat (usec): min=10971, max=21036, avg=15557.10, stdev=1622.34 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[11600], 5.00th=[12125], 10.00th=[12518], 20.00th=[14746], 00:12:35.300 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:12:35.300 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:12:35.300 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20841], 99.95th=[21103], 00:12:35.300 | 99.99th=[21103] 00:12:35.300 bw ( KiB/s): min=16384, max=16416, per=36.40%, avg=16400.00, stdev=22.63, samples=2 00:12:35.300 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:12:35.300 lat (usec) : 500=0.01% 00:12:35.300 lat (msec) : 10=0.57%, 20=98.89%, 50=0.53% 00:12:35.300 cpu : usr=5.39%, sys=17.47%, ctx=519, majf=0, minf=1 00:12:35.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:35.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.300 issued rwts: total=4037,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.300 job3: (groupid=0, jobs=1): err= 0: pid=70498: Wed Nov 6 13:42:15 2024 00:12:35.300 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:12:35.300 slat (usec): min=6, max=7435, avg=218.53, stdev=1024.11 00:12:35.300 clat (usec): min=18445, max=42553, avg=28550.84, stdev=3297.81 00:12:35.300 lat (usec): min=20392, max=43377, avg=28769.37, stdev=3241.29 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[20841], 5.00th=[23200], 10.00th=[24511], 20.00th=[25822], 00:12:35.300 | 30.00th=[26608], 40.00th=[27657], 50.00th=[28443], 60.00th=[29492], 00:12:35.300 | 70.00th=[30540], 80.00th=[31589], 90.00th=[32375], 95.00th=[33162], 00:12:35.300 | 99.00th=[36963], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 
00:12:35.300 | 99.99th=[42730] 00:12:35.300 write: IOPS=2223, BW=8892KiB/s (9106kB/s)(8928KiB/1004msec); 0 zone resets 00:12:35.300 slat (usec): min=9, max=7768, avg=237.66, stdev=975.69 00:12:35.300 clat (usec): min=2490, max=49195, avg=30182.51, stdev=6003.02 00:12:35.300 lat (usec): min=6436, max=49230, avg=30420.17, stdev=5955.17 00:12:35.300 clat percentiles (usec): 00:12:35.300 | 1.00th=[11994], 5.00th=[23200], 10.00th=[24511], 20.00th=[26346], 00:12:35.300 | 30.00th=[27657], 40.00th=[28443], 50.00th=[28967], 60.00th=[30278], 00:12:35.300 | 70.00th=[31065], 80.00th=[33162], 90.00th=[40633], 95.00th=[42730], 00:12:35.300 | 99.00th=[44827], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:12:35.300 | 99.99th=[49021] 00:12:35.300 bw ( KiB/s): min= 8064, max= 8785, per=18.70%, avg=8424.50, stdev=509.82, samples=2 00:12:35.300 iops : min= 2016, max= 2196, avg=2106.00, stdev=127.28, samples=2 00:12:35.300 lat (msec) : 4=0.02%, 10=0.19%, 20=0.96%, 50=98.83% 00:12:35.300 cpu : usr=2.09%, sys=8.18%, ctx=268, majf=0, minf=3 00:12:35.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:12:35.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.300 issued rwts: total=2048,2232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.300 00:12:35.300 Run status group 0 (all jobs): 00:12:35.300 READ: bw=41.5MiB/s (43.5MB/s), 8159KiB/s-15.7MiB/s (8355kB/s-16.5MB/s), io=41.6MiB (43.7MB), run=1003-1004msec 00:12:35.300 WRITE: bw=44.0MiB/s (46.1MB/s), 8892KiB/s-16.0MiB/s (9106kB/s-16.7MB/s), io=44.2MiB (46.3MB), run=1003-1004msec 00:12:35.300 00:12:35.300 Disk stats (read/write): 00:12:35.300 nvme0n1: ios=1791/2048, merge=0/0, ticks=13062/12280, in_queue=25342, util=90.57% 00:12:35.300 nvme0n2: ios=2097/2375, merge=0/0, ticks=12735/13320, in_queue=26055, util=89.60% 00:12:35.300 nvme0n3: ios=3483/3584, merge=0/0, ticks=16467/15509, in_queue=31976, util=90.68% 00:12:35.300 nvme0n4: ios=1725/2048, merge=0/0, ticks=11716/14227, in_queue=25943, util=90.63% 00:12:35.300 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:35.300 [global] 00:12:35.300 thread=1 00:12:35.300 invalidate=1 00:12:35.300 rw=randwrite 00:12:35.300 time_based=1 00:12:35.300 runtime=1 00:12:35.300 ioengine=libaio 00:12:35.300 direct=1 00:12:35.300 bs=4096 00:12:35.300 iodepth=128 00:12:35.300 norandommap=0 00:12:35.300 numjobs=1 00:12:35.300 00:12:35.300 verify_dump=1 00:12:35.300 verify_backlog=512 00:12:35.300 verify_state_save=0 00:12:35.300 do_verify=1 00:12:35.300 verify=crc32c-intel 00:12:35.300 [job0] 00:12:35.300 filename=/dev/nvme0n1 00:12:35.300 [job1] 00:12:35.300 filename=/dev/nvme0n2 00:12:35.300 [job2] 00:12:35.300 filename=/dev/nvme0n3 00:12:35.300 [job3] 00:12:35.300 filename=/dev/nvme0n4 00:12:35.300 Could not set queue depth (nvme0n1) 00:12:35.300 Could not set queue depth (nvme0n2) 00:12:35.300 Could not set queue depth (nvme0n3) 00:12:35.300 Could not set queue depth (nvme0n4) 00:12:35.300 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.300 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.300 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.300 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.300 fio-3.35 00:12:35.300 Starting 4 threads 00:12:36.679 00:12:36.679 job0: (groupid=0, jobs=1): err= 0: pid=70557: Wed Nov 6 13:42:17 2024 00:12:36.679 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:12:36.679 slat (usec): min=7, max=20549, avg=169.27, stdev=958.39 00:12:36.679 clat (usec): min=4798, max=86229, avg=22939.17, stdev=19342.59 00:12:36.679 lat (usec): min=8718, max=86249, avg=23108.44, stdev=19460.71 00:12:36.679 clat percentiles (usec): 00:12:36.679 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12387], 20.00th=[13304], 00:12:36.679 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14484], 00:12:36.679 | 70.00th=[14746], 80.00th=[15795], 90.00th=[55313], 95.00th=[68682], 00:12:36.679 | 99.00th=[81265], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:12:36.679 | 99.99th=[86508] 00:12:36.679 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:12:36.679 slat (usec): min=8, max=13854, avg=174.62, stdev=855.26 00:12:36.679 clat (msec): min=8, max=107, avg=22.27, stdev=21.78 00:12:36.679 lat (msec): min=8, max=107, avg=22.44, stdev=21.93 00:12:36.679 clat percentiles (msec): 00:12:36.679 | 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:12:36.679 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:12:36.679 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 58], 95.00th=[ 84], 00:12:36.679 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 108], 00:12:36.679 | 99.99th=[ 108] 00:12:36.679 bw ( KiB/s): min= 3608, max=20016, per=28.39%, avg=11812.00, stdev=11602.21, samples=2 00:12:36.679 iops : min= 902, max= 5004, avg=2953.00, stdev=2900.55, samples=2 00:12:36.679 lat (msec) : 10=0.14%, 20=80.98%, 50=4.95%, 100=13.32%, 250=0.60% 00:12:36.679 cpu : usr=2.98%, sys=11.52%, ctx=499, majf=0, minf=1 00:12:36.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:36.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.679 issued rwts: total=2565,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.679 job1: (groupid=0, jobs=1): err= 0: pid=70558: Wed Nov 6 13:42:17 2024 00:12:36.679 read: IOPS=1081, BW=4326KiB/s (4430kB/s)(4356KiB/1007msec) 00:12:36.679 slat (usec): min=9, max=35219, avg=429.81, stdev=2498.33 00:12:36.679 clat (usec): min=1471, max=106951, avg=50539.15, stdev=21471.87 00:12:36.679 lat (msec): min=7, max=106, avg=50.97, stdev=21.48 00:12:36.679 clat percentiles (msec): 00:12:36.679 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 34], 00:12:36.679 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 56], 00:12:36.679 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 97], 00:12:36.679 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 107], 00:12:36.679 | 99.99th=[ 107] 00:12:36.679 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:12:36.679 slat (usec): min=31, max=20338, avg=338.91, stdev=1877.35 00:12:36.679 clat (usec): min=19600, max=73912, avg=44882.28, stdev=15769.01 00:12:36.679 lat (usec): min=21522, max=73947, avg=45221.19, stdev=15768.65 00:12:36.679 clat percentiles (usec): 00:12:36.679 | 1.00th=[21627], 5.00th=[24773], 
10.00th=[25035], 20.00th=[26084], 00:12:36.679 | 30.00th=[28443], 40.00th=[38536], 50.00th=[52167], 60.00th=[54264], 00:12:36.679 | 70.00th=[56886], 80.00th=[58459], 90.00th=[61604], 95.00th=[68682], 00:12:36.679 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:12:36.679 | 99.99th=[73925] 00:12:36.679 bw ( KiB/s): min= 4104, max= 7672, per=14.15%, avg=5888.00, stdev=2522.96, samples=2 00:12:36.679 iops : min= 1026, max= 1918, avg=1472.00, stdev=630.74, samples=2 00:12:36.679 lat (msec) : 2=0.04%, 10=1.22%, 20=1.33%, 50=48.61%, 100=47.62% 00:12:36.679 lat (msec) : 250=1.18% 00:12:36.679 cpu : usr=1.49%, sys=5.77%, ctx=84, majf=0, minf=7 00:12:36.679 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:12:36.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.679 issued rwts: total=1089,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.679 job2: (groupid=0, jobs=1): err= 0: pid=70559: Wed Nov 6 13:42:17 2024 00:12:36.679 read: IOPS=4086, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:12:36.679 slat (usec): min=18, max=5035, avg=113.07, stdev=529.32 00:12:36.679 clat (usec): min=496, max=20654, avg=14940.04, stdev=1609.52 00:12:36.679 lat (usec): min=3580, max=20697, avg=15053.12, stdev=1603.04 00:12:36.679 clat percentiles (usec): 00:12:36.679 | 1.00th=[11207], 5.00th=[11994], 10.00th=[12518], 20.00th=[13566], 00:12:36.679 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:12:36.679 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16581], 95.00th=[17171], 00:12:36.679 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:12:36.679 | 99.99th=[20579] 00:12:36.679 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:12:36.679 slat (usec): min=14, max=4417, avg=104.56, stdev=399.99 00:12:36.679 clat (usec): min=3645, max=19402, avg=14190.29, stdev=1970.36 00:12:36.679 lat (usec): min=3688, max=19443, avg=14294.85, stdev=1957.92 00:12:36.679 clat percentiles (usec): 00:12:36.679 | 1.00th=[ 8586], 5.00th=[11076], 10.00th=[11600], 20.00th=[12649], 00:12:36.679 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14484], 60.00th=[14877], 00:12:36.679 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16188], 95.00th=[16712], 00:12:36.679 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:12:36.679 | 99.99th=[19530] 00:12:36.679 bw ( KiB/s): min=16416, max=19441, per=43.09%, avg=17928.50, stdev=2139.00, samples=2 00:12:36.679 iops : min= 4104, max= 4860, avg=4482.00, stdev=534.57, samples=2 00:12:36.679 lat (usec) : 500=0.01% 00:12:36.679 lat (msec) : 4=0.10%, 10=0.86%, 20=99.01%, 50=0.01% 00:12:36.679 cpu : usr=6.09%, sys=18.66%, ctx=556, majf=0, minf=6 00:12:36.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:36.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.680 issued rwts: total=4099,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.680 job3: (groupid=0, jobs=1): err= 0: pid=70560: Wed Nov 6 13:42:17 2024 00:12:36.680 read: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec) 00:12:36.680 slat (usec): min=7, max=19748, avg=451.81, stdev=1990.83 00:12:36.680 clat (usec): 
min=37059, max=79270, avg=57344.99, stdev=8562.06 00:12:36.680 lat (usec): min=38742, max=82875, avg=57796.80, stdev=8515.80 00:12:36.680 clat percentiles (usec): 00:12:36.680 | 1.00th=[43779], 5.00th=[46924], 10.00th=[48497], 20.00th=[50070], 00:12:36.680 | 30.00th=[53740], 40.00th=[54789], 50.00th=[55313], 60.00th=[56361], 00:12:36.680 | 70.00th=[57410], 80.00th=[61604], 90.00th=[71828], 95.00th=[74974], 00:12:36.680 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:12:36.680 | 99.99th=[79168] 00:12:36.680 write: IOPS=1268, BW=5074KiB/s (5196kB/s)(5120KiB/1009msec); 0 zone resets 00:12:36.680 slat (usec): min=9, max=17245, avg=410.87, stdev=1658.33 00:12:36.680 clat (msec): min=7, max=106, avg=52.94, stdev=18.49 00:12:36.680 lat (msec): min=8, max=106, avg=53.35, stdev=18.52 00:12:36.680 clat percentiles (msec): 00:12:36.680 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 42], 00:12:36.680 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 52], 00:12:36.680 | 70.00th=[ 56], 80.00th=[ 65], 90.00th=[ 86], 95.00th=[ 96], 00:12:36.680 | 99.00th=[ 104], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107], 00:12:36.680 | 99.99th=[ 107] 00:12:36.680 bw ( KiB/s): min= 3409, max= 5808, per=11.07%, avg=4608.50, stdev=1696.35, samples=2 00:12:36.680 iops : min= 852, max= 1452, avg=1152.00, stdev=424.26, samples=2 00:12:36.680 lat (msec) : 10=0.09%, 20=0.48%, 50=38.45%, 100=59.46%, 250=1.52% 00:12:36.680 cpu : usr=0.99%, sys=4.56%, ctx=251, majf=0, minf=6 00:12:36.680 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:12:36.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.680 issued rwts: total=1024,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.680 00:12:36.680 Run status group 0 (all jobs): 00:12:36.680 READ: bw=34.0MiB/s (35.6MB/s), 4059KiB/s-16.0MiB/s (4157kB/s-16.7MB/s), io=34.3MiB (35.9MB), run=1003-1009msec 00:12:36.680 WRITE: bw=40.6MiB/s (42.6MB/s), 5074KiB/s-17.9MiB/s (5196kB/s-18.8MB/s), io=41.0MiB (43.0MB), run=1003-1009msec 00:12:36.680 00:12:36.680 Disk stats (read/write): 00:12:36.680 nvme0n1: ios=2610/2857, merge=0/0, ticks=13656/11001, in_queue=24657, util=88.16% 00:12:36.680 nvme0n2: ios=945/1024, merge=0/0, ticks=12832/13189, in_queue=26021, util=88.66% 00:12:36.680 nvme0n3: ios=3614/3863, merge=0/0, ticks=16352/15460, in_queue=31812, util=90.38% 00:12:36.680 nvme0n4: ios=1041/1067, merge=0/0, ticks=14418/11629, in_queue=26047, util=89.81% 00:12:36.680 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:36.680 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70573 00:12:36.680 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:36.680 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:36.680 [global] 00:12:36.680 thread=1 00:12:36.680 invalidate=1 00:12:36.680 rw=read 00:12:36.680 time_based=1 00:12:36.680 runtime=10 00:12:36.680 ioengine=libaio 00:12:36.680 direct=1 00:12:36.680 bs=4096 00:12:36.680 iodepth=1 00:12:36.680 norandommap=1 00:12:36.680 numjobs=1 00:12:36.680 00:12:36.680 [job0] 00:12:36.680 filename=/dev/nvme0n1 00:12:36.680 [job1] 00:12:36.680 filename=/dev/nvme0n2 00:12:36.680 [job2] 00:12:36.680 
filename=/dev/nvme0n3 00:12:36.680 [job3] 00:12:36.680 filename=/dev/nvme0n4 00:12:36.680 Could not set queue depth (nvme0n1) 00:12:36.680 Could not set queue depth (nvme0n2) 00:12:36.680 Could not set queue depth (nvme0n3) 00:12:36.680 Could not set queue depth (nvme0n4) 00:12:36.680 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.680 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.680 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.680 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.680 fio-3.35 00:12:36.680 Starting 4 threads 00:12:40.008 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:40.008 fio: pid=70616, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.008 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32301056, buflen=4096 00:12:40.008 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:40.008 fio: pid=70615, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.008 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37097472, buflen=4096 00:12:40.008 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.008 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:40.267 fio: pid=70613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.267 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42000384, buflen=4096 00:12:40.267 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.267 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:40.527 fio: pid=70614, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.527 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6488064, buflen=4096 00:12:40.527 00:12:40.527 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70613: Wed Nov 6 13:42:21 2024 00:12:40.527 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(40.1MiB/3266msec) 00:12:40.527 slat (usec): min=6, max=16735, avg=15.74, stdev=257.52 00:12:40.527 clat (usec): min=110, max=4649, avg=301.66, stdev=114.41 00:12:40.527 lat (usec): min=121, max=16940, avg=317.40, stdev=280.41 00:12:40.527 clat percentiles (usec): 00:12:40.527 | 1.00th=[ 130], 5.00th=[ 143], 10.00th=[ 159], 20.00th=[ 251], 00:12:40.528 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:12:40.528 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 392], 00:12:40.528 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[ 1029], 99.95th=[ 2311], 00:12:40.528 | 99.99th=[ 4178] 00:12:40.528 bw ( KiB/s): min=11104, max=12983, per=23.37%, avg=11993.00, stdev=601.02, samples=6 00:12:40.528 iops : min= 2776, max= 
3245, avg=2998.00, stdev=150.00, samples=6 00:12:40.528 lat (usec) : 250=19.80%, 500=79.41%, 750=0.61%, 1000=0.07% 00:12:40.528 lat (msec) : 2=0.05%, 4=0.04%, 10=0.02% 00:12:40.528 cpu : usr=0.52%, sys=3.12%, ctx=10262, majf=0, minf=1 00:12:40.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 issued rwts: total=10255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.528 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70614: Wed Nov 6 13:42:21 2024 00:12:40.528 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(70.2MiB/3520msec) 00:12:40.528 slat (usec): min=9, max=12757, avg=14.47, stdev=191.97 00:12:40.528 clat (usec): min=109, max=2274, avg=180.63, stdev=42.53 00:12:40.528 lat (usec): min=119, max=13096, avg=195.10, stdev=197.59 00:12:40.528 clat percentiles (usec): 00:12:40.528 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 147], 20.00th=[ 157], 00:12:40.528 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 182], 00:12:40.528 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 239], 00:12:40.528 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 457], 99.95th=[ 603], 00:12:40.528 | 99.99th=[ 1975] 00:12:40.528 bw ( KiB/s): min=19800, max=20247, per=39.04%, avg=20038.00, stdev=169.93, samples=6 00:12:40.528 iops : min= 4950, max= 5061, avg=5009.00, stdev=42.23, samples=6 00:12:40.528 lat (usec) : 250=96.82%, 500=3.08%, 750=0.06%, 1000=0.01% 00:12:40.528 lat (msec) : 2=0.02%, 4=0.01% 00:12:40.528 cpu : usr=0.80%, sys=4.49%, ctx=17979, majf=0, minf=2 00:12:40.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 issued rwts: total=17969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.528 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70615: Wed Nov 6 13:42:21 2024 00:12:40.528 read: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(35.4MiB/3046msec) 00:12:40.528 slat (usec): min=6, max=11288, avg=28.81, stdev=156.76 00:12:40.528 clat (usec): min=121, max=7800, avg=305.20, stdev=166.44 00:12:40.528 lat (usec): min=133, max=11535, avg=334.01, stdev=228.34 00:12:40.528 clat percentiles (usec): 00:12:40.528 | 1.00th=[ 165], 5.00th=[ 217], 10.00th=[ 241], 20.00th=[ 265], 00:12:40.528 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 310], 00:12:40.528 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 371], 00:12:40.528 | 99.00th=[ 437], 99.50th=[ 510], 99.90th=[ 3163], 99.95th=[ 3982], 00:12:40.528 | 99.99th=[ 7832] 00:12:40.528 bw ( KiB/s): min=11392, max=12168, per=22.91%, avg=11760.20, stdev=317.43, samples=5 00:12:40.528 iops : min= 2848, max= 3042, avg=2940.00, stdev=79.41, samples=5 00:12:40.528 lat (usec) : 250=13.27%, 500=86.17%, 750=0.26%, 1000=0.07% 00:12:40.528 lat (msec) : 2=0.07%, 4=0.11%, 10=0.04% 00:12:40.528 cpu : usr=1.28%, sys=7.16%, ctx=9084, majf=0, minf=2 00:12:40.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:40.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 issued rwts: total=9058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.528 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70616: Wed Nov 6 13:42:21 2024 00:12:40.528 read: IOPS=2782, BW=10.9MiB/s (11.4MB/s)(30.8MiB/2834msec) 00:12:40.528 slat (usec): min=18, max=458, avg=31.32, stdev= 7.64 00:12:40.528 clat (usec): min=242, max=6022, avg=324.88, stdev=129.97 00:12:40.528 lat (usec): min=267, max=6052, avg=356.19, stdev=130.36 00:12:40.528 clat percentiles (usec): 00:12:40.528 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:12:40.528 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:12:40.528 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 367], 00:12:40.528 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 1958], 99.95th=[ 4621], 00:12:40.528 | 99.99th=[ 5997] 00:12:40.528 bw ( KiB/s): min=10992, max=11297, per=21.75%, avg=11163.80, stdev=136.60, samples=5 00:12:40.528 iops : min= 2748, max= 2824, avg=2790.80, stdev=34.02, samples=5 00:12:40.528 lat (usec) : 250=0.13%, 500=99.58%, 750=0.06%, 1000=0.01% 00:12:40.528 lat (msec) : 2=0.11%, 4=0.04%, 10=0.05% 00:12:40.528 cpu : usr=1.62%, sys=7.77%, ctx=7888, majf=0, minf=2 00:12:40.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.528 issued rwts: total=7887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.528 00:12:40.528 Run status group 0 (all jobs): 00:12:40.528 READ: bw=50.1MiB/s (52.6MB/s), 10.9MiB/s-19.9MiB/s (11.4MB/s-20.9MB/s), io=176MiB (185MB), run=2834-3520msec 00:12:40.528 00:12:40.528 Disk stats (read/write): 00:12:40.528 nvme0n1: ios=9387/0, merge=0/0, ticks=2996/0, in_queue=2996, util=94.82% 00:12:40.528 nvme0n2: ios=17009/0, merge=0/0, ticks=3202/0, in_queue=3202, util=95.09% 00:12:40.528 nvme0n3: ios=8494/0, merge=0/0, ticks=2662/0, in_queue=2662, util=96.13% 00:12:40.528 nvme0n4: ios=7317/0, merge=0/0, ticks=2401/0, in_queue=2401, util=96.54% 00:12:40.528 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.528 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:40.788 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.788 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:41.047 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.047 13:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:41.306 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.306 
13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:41.566 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.566 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70573 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:41.825 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:42.085 nvmf hotplug test: fio failed as expected 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:42.085 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.085 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:12:42.345 rmmod nvme_tcp 00:12:42.345 rmmod nvme_fabrics 00:12:42.345 rmmod nvme_keyring 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70078 ']' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70078 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 70078 ']' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 70078 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70078 00:12:42.345 killing process with pid 70078 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70078' 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 70078 00:12:42.345 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 70078 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:42.605 13:42:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:42.605 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:42.865 00:12:42.865 real 0m20.406s 00:12:42.865 user 1m15.978s 00:12:42.865 sys 0m9.614s 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.865 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.865 ************************************ 00:12:42.865 END TEST nvmf_fio_target 00:12:42.865 ************************************ 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:43.124 ************************************ 00:12:43.124 START TEST nvmf_bdevio 00:12:43.124 ************************************ 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:43.124 * Looking for test storage... 
00:12:43.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:12:43.124 13:42:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.124 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.125 --rc genhtml_branch_coverage=1 00:12:43.125 --rc genhtml_function_coverage=1 00:12:43.125 --rc genhtml_legend=1 00:12:43.125 --rc geninfo_all_blocks=1 00:12:43.125 --rc geninfo_unexecuted_blocks=1 00:12:43.125 00:12:43.125 ' 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.125 --rc genhtml_branch_coverage=1 00:12:43.125 --rc genhtml_function_coverage=1 00:12:43.125 --rc genhtml_legend=1 00:12:43.125 --rc geninfo_all_blocks=1 00:12:43.125 --rc geninfo_unexecuted_blocks=1 00:12:43.125 00:12:43.125 ' 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.125 --rc genhtml_branch_coverage=1 00:12:43.125 --rc genhtml_function_coverage=1 00:12:43.125 --rc genhtml_legend=1 00:12:43.125 --rc geninfo_all_blocks=1 00:12:43.125 --rc geninfo_unexecuted_blocks=1 00:12:43.125 00:12:43.125 ' 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.125 --rc genhtml_branch_coverage=1 00:12:43.125 --rc genhtml_function_coverage=1 00:12:43.125 --rc genhtml_legend=1 00:12:43.125 --rc geninfo_all_blocks=1 00:12:43.125 --rc geninfo_unexecuted_blocks=1 00:12:43.125 00:12:43.125 ' 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.125 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
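[annotation] The "[: : integer expression expected" complaint above is a real but non-fatal bash warning from nvmf/common.sh line 33: build_nvmf_app_args runs a numeric test, '[' '' -eq 1 ']', against a variable that is empty in this environment, the test fails with that message, and execution simply continues. The defensive spelling supplies a numeric default first. SOME_FLAG below is a stand-in name; the actual variable is not visible in this trace:

if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # :-0 turns an unset/empty flag into 0, silencing the warning
    echo "flag enabled"
fi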
00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.385 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:43.386 Cannot find device "nvmf_init_br" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:43.386 Cannot find device "nvmf_init_br2" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:43.386 Cannot find device "nvmf_tgt_br" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.386 Cannot find device "nvmf_tgt_br2" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:43.386 Cannot find device "nvmf_init_br" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:43.386 Cannot find device "nvmf_init_br2" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:43.386 Cannot find device "nvmf_tgt_br" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:43.386 Cannot find device "nvmf_tgt_br2" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:43.386 Cannot find device "nvmf_br" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:43.386 Cannot find device "nvmf_init_if" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:43.386 Cannot find device "nvmf_init_if2" 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:43.386 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.645 
13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.645 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:43.646 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:43.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:12:43.905 00:12:43.905 --- 10.0.0.3 ping statistics --- 00:12:43.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.905 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:43.905 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:43.905 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:12:43.905 00:12:43.905 --- 10.0.0.4 ping statistics --- 00:12:43.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.905 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:43.905 00:12:43.905 --- 10.0.0.1 ping statistics --- 00:12:43.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.905 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:43.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:12:43.905 00:12:43.905 --- 10.0.0.2 ping statistics --- 00:12:43.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.905 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.905 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71001 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71001 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 71001 ']' 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.906 13:42:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 [2024-11-06 13:42:24.730745] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
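[annotation] Before the target's startup output continues below, it is worth mapping what nvmf_veth_init (nvmf/common.sh@177-@214, traced above) actually built, since the four pings just verified it end to end. A condensed sketch covering one initiator/target pair; the run above creates a second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) the same way:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end will live in the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # the two *_br peers meet on the bridge
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                         # host to namespaced target, as verified above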
00:12:43.906 [2024-11-06 13:42:24.730833] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.165 [2024-11-06 13:42:24.882894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.165 [2024-11-06 13:42:24.958929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.165 [2024-11-06 13:42:24.958988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.165 [2024-11-06 13:42:24.959000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.165 [2024-11-06 13:42:24.959011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.165 [2024-11-06 13:42:24.959020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.165 [2024-11-06 13:42:24.960778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:44.165 [2024-11-06 13:42:24.960926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:44.165 [2024-11-06 13:42:24.961040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.165 [2024-11-06 13:42:24.961041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 [2024-11-06 13:42:25.763081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 Malloc0 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
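[annotation] The bdevio.sh provisioning steps traced here (@18 through @22; the namespace and listener calls complete just below) are plain JSON-RPCs. rpc_cmd in the test harness wraps scripts/rpc.py, so the equivalent standalone invocations, assuming the default RPC socket, would be:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk with 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420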
00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.103 [2024-11-06 13:42:25.848362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:45.103 { 00:12:45.103 "params": { 00:12:45.103 "name": "Nvme$subsystem", 00:12:45.103 "trtype": "$TEST_TRANSPORT", 00:12:45.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.103 "adrfam": "ipv4", 00:12:45.103 "trsvcid": "$NVMF_PORT", 00:12:45.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.103 "hdgst": ${hdgst:-false}, 00:12:45.103 "ddgst": ${ddgst:-false} 00:12:45.103 }, 00:12:45.103 "method": "bdev_nvme_attach_controller" 00:12:45.103 } 00:12:45.103 EOF 00:12:45.103 )") 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:45.103 13:42:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:45.103 "params": { 00:12:45.103 "name": "Nvme1", 00:12:45.103 "trtype": "tcp", 00:12:45.103 "traddr": "10.0.0.3", 00:12:45.103 "adrfam": "ipv4", 00:12:45.103 "trsvcid": "4420", 00:12:45.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.103 "hdgst": false, 00:12:45.103 "ddgst": false 00:12:45.103 }, 00:12:45.103 "method": "bdev_nvme_attach_controller" 00:12:45.103 }' 00:12:45.103 [2024-11-06 13:42:25.912134] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
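[annotation] One detail worth noting above: bdevio receives its configuration as --json /dev/fd/62, meaning the JSON emitted by gen_nvmf_target_json never touches disk. That file-descriptor path is what bash process substitution produces, so the invocation is effectively:

test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)    # <(...) expands to a /dev/fd/N path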
00:12:45.103 [2024-11-06 13:42:25.912209] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:12:45.362 [2024-11-06 13:42:26.069820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:45.362 [2024-11-06 13:42:26.142070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.362 [2024-11-06 13:42:26.142267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.362 [2024-11-06 13:42:26.142269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.621 I/O targets: 00:12:45.621 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:45.621 00:12:45.621 00:12:45.621 CUnit - A unit testing framework for C - Version 2.1-3 00:12:45.621 http://cunit.sourceforge.net/ 00:12:45.621 00:12:45.621 00:12:45.621 Suite: bdevio tests on: Nvme1n1 00:12:45.621 Test: blockdev write read block ...passed 00:12:45.621 Test: blockdev write zeroes read block ...passed 00:12:45.621 Test: blockdev write zeroes read no split ...passed 00:12:45.621 Test: blockdev write zeroes read split ...passed 00:12:45.621 Test: blockdev write zeroes read split partial ...passed 00:12:45.621 Test: blockdev reset ...[2024-11-06 13:42:26.480313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:45.621 [2024-11-06 13:42:26.480426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108ff50 (9): Bad file descriptor 00:12:45.621 [2024-11-06 13:42:26.496132] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:45.621 passed 00:12:45.621 Test: blockdev write read 8 blocks ...passed 00:12:45.621 Test: blockdev write read size > 128k ...passed 00:12:45.621 Test: blockdev write read invalid size ...passed 00:12:45.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.621 Test: blockdev write read max offset ...passed 00:12:45.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.881 Test: blockdev writev readv 8 blocks ...passed 00:12:45.881 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.881 Test: blockdev writev readv block ...passed 00:12:45.881 Test: blockdev writev readv size > 128k ...passed 00:12:45.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.881 Test: blockdev comparev and writev ...[2024-11-06 13:42:26.674461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.674519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.674537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.674546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.675163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.675188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.675202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.675213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.675656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.675677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.675691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.675701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.676151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.676172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.676186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:45.881 [2024-11-06 13:42:26.676197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:45.881 passed 00:12:45.881 Test: blockdev nvme passthru rw ...passed 00:12:45.881 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:42:26.760374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.881 [2024-11-06 13:42:26.760412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.760568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.881 [2024-11-06 13:42:26.760584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:45.881 [2024-11-06 13:42:26.760746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.881 [2024-11-06 13:42:26.760761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:45.881 passed 00:12:45.882 Test: blockdev nvme admin passthru ...[2024-11-06 13:42:26.760886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.882 [2024-11-06 13:42:26.760903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:45.882 passed 00:12:46.141 Test: blockdev copy ...passed 00:12:46.141 00:12:46.141 Run Summary: Type Total Ran Passed Failed Inactive 00:12:46.141 suites 1 1 n/a 0 0 00:12:46.141 tests 23 23 23 0 0 00:12:46.141 asserts 152 152 152 0 n/a 00:12:46.141 00:12:46.141 Elapsed time = 0.929 seconds 00:12:46.141 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.142 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.401 rmmod nvme_tcp 00:12:46.401 rmmod nvme_fabrics 00:12:46.401 rmmod nvme_keyring 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
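[annotation] The rmmod lines above come from the module-unload retry bracket in nvmf/common.sh (@124-@129): failures are tolerated while connections drain, hence set +e before and set -e after. A rough reconstruction; the loop body's break condition and back-off are assumptions, since only the successful first pass is visible in the trace:

set +e                          # rmmod may fail while references are still held
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1                     # assumed back-off, not shown in the trace
done
modprobe -v -r nvme-fabrics     # @127
set -e                          # @128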
00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71001 ']' 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71001 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 71001 ']' 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 71001 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71001 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:46.401 killing process with pid 71001 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71001' 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 71001 00:12:46.401 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 71001 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:46.660 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.919 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:47.178 00:12:47.178 real 0m4.041s 00:12:47.178 user 0m12.717s 00:12:47.178 sys 0m1.257s 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 ************************************ 00:12:47.178 END TEST nvmf_bdevio 00:12:47.178 ************************************ 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:47.178 ************************************ 00:12:47.178 END TEST nvmf_target_core 00:12:47.178 ************************************ 00:12:47.178 00:12:47.178 real 3m41.740s 00:12:47.178 user 11m5.292s 00:12:47.178 sys 1m19.014s 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 13:42:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:47.178 13:42:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:47.178 13:42:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.178 13:42:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:47.178 ************************************ 00:12:47.178 START TEST nvmf_target_extra 00:12:47.178 ************************************ 00:12:47.178 13:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:47.178 * Looking for test storage... 
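[annotation] For orientation: the START/END banners and the real/user/sys triplets throughout this log (0m4.041s for nvmf_bdevio just above, 3m41.740s for all of nvmf_target_core) are produced by the run_test wrapper timing each suite. A rough, illustrative shape of that helper; the real one in autotest_common.sh also validates its arguments and manages xtrace:

run_test() {
    local test_name=$1; shift
    echo "************ START TEST $test_name ************"
    time "$@"                   # emits the real/user/sys lines
    local rc=$?
    echo "************ END TEST $test_name ************"
    return $rc
}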
00:12:47.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:47.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.438 --rc genhtml_branch_coverage=1 00:12:47.438 --rc genhtml_function_coverage=1 00:12:47.438 --rc genhtml_legend=1 00:12:47.438 --rc geninfo_all_blocks=1 00:12:47.438 --rc geninfo_unexecuted_blocks=1 00:12:47.438 00:12:47.438 ' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:47.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.438 --rc genhtml_branch_coverage=1 00:12:47.438 --rc genhtml_function_coverage=1 00:12:47.438 --rc genhtml_legend=1 00:12:47.438 --rc geninfo_all_blocks=1 00:12:47.438 --rc geninfo_unexecuted_blocks=1 00:12:47.438 00:12:47.438 ' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:47.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.438 --rc genhtml_branch_coverage=1 00:12:47.438 --rc genhtml_function_coverage=1 00:12:47.438 --rc genhtml_legend=1 00:12:47.438 --rc geninfo_all_blocks=1 00:12:47.438 --rc geninfo_unexecuted_blocks=1 00:12:47.438 00:12:47.438 ' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:47.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.438 --rc genhtml_branch_coverage=1 00:12:47.438 --rc genhtml_function_coverage=1 00:12:47.438 --rc genhtml_legend=1 00:12:47.438 --rc geninfo_all_blocks=1 00:12:47.438 --rc geninfo_unexecuted_blocks=1 00:12:47.438 00:12:47.438 ' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.438 13:42:28 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.438 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.439 ************************************ 00:12:47.439 START TEST nvmf_example 00:12:47.439 ************************************ 00:12:47.439 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:47.699 * Looking for test storage... 
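The "[: : integer expression expected" complaint captured above is a real, if harmless, scripting bug: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' on a variable that is empty in this configuration, and test's -eq requires an integer operand. The script survives because only the non-zero exit status is consumed. A hedged sketch of the failing pattern and a defensive rewrite; SOME_TEST_FLAG and enable_feature are hypothetical stand-ins, since the trace does not show which variable expanded to the empty string:

    # What the log captures (errors when the variable is unset or empty):
    [ "$SOME_TEST_FLAG" -eq 1 ] && enable_feature   # [: : integer expression expected

    # Defensive form: treat unset/empty as 0 via a default expansion.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        enable_feature
    fi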
00:12:47.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:47.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.699 --rc genhtml_branch_coverage=1 00:12:47.699 --rc genhtml_function_coverage=1 00:12:47.699 --rc genhtml_legend=1 00:12:47.699 --rc geninfo_all_blocks=1 00:12:47.699 --rc geninfo_unexecuted_blocks=1 00:12:47.699 00:12:47.699 ' 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:47.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.699 --rc genhtml_branch_coverage=1 00:12:47.699 --rc genhtml_function_coverage=1 00:12:47.699 --rc genhtml_legend=1 00:12:47.699 --rc geninfo_all_blocks=1 00:12:47.699 --rc geninfo_unexecuted_blocks=1 00:12:47.699 00:12:47.699 ' 00:12:47.699 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:47.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.699 --rc genhtml_branch_coverage=1 00:12:47.699 --rc genhtml_function_coverage=1 00:12:47.699 --rc genhtml_legend=1 00:12:47.699 --rc geninfo_all_blocks=1 00:12:47.699 --rc geninfo_unexecuted_blocks=1 00:12:47.700 00:12:47.700 ' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:47.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.700 --rc genhtml_branch_coverage=1 00:12:47.700 --rc genhtml_function_coverage=1 00:12:47.700 --rc genhtml_legend=1 00:12:47.700 --rc geninfo_all_blocks=1 00:12:47.700 --rc geninfo_unexecuted_blocks=1 00:12:47.700 00:12:47.700 ' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:47.700 13:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:47.700 13:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:47.700 Cannot find device "nvmf_init_br" 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:47.700 Cannot find device "nvmf_init_br2" 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:47.700 Cannot find device "nvmf_tgt_br" 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:12:47.700 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.960 Cannot find device "nvmf_tgt_br2" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:47.960 Cannot find device "nvmf_init_br" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:47.960 Cannot find device "nvmf_init_br2" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:47.960 Cannot find device "nvmf_tgt_br" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:47.960 Cannot find device "nvmf_tgt_br2" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:47.960 Cannot find device "nvmf_br" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:47.960 Cannot find 
device "nvmf_init_if" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:47.960 Cannot find device "nvmf_init_if2" 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:47.960 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:48.220 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:48.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:12:48.220 00:12:48.220 --- 10.0.0.3 ping statistics --- 00:12:48.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.220 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:48.220 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:48.220 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:48.220 00:12:48.220 --- 10.0.0.4 ping statistics --- 00:12:48.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.220 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:48.220 00:12:48.220 --- 10.0.0.1 ping statistics --- 00:12:48.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.220 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:48.220 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:48.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:12:48.220 00:12:48.220 --- 10.0.0.2 ping statistics --- 00:12:48.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.220 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71352 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71352 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 71352 ']' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
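Stepping back from the trace: nvmf_veth_init has built a self-contained test network. Two initiator-side veth pairs sit on the host (10.0.0.1 and 10.0.0.2), two target-side pairs have their far ends inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), everything is joined by the nvmf_br bridge, and the port-4420 iptables ACCEPT rules are tagged with an SPDK_NVMF comment so teardown can find them; the four pings then verify reachability in both directions. A condensed sketch of one initiator/target pair, reconstructed from the commands above (the *_if2 twins follow identically; the best-effort cleanup preamble and its "Cannot find device" noise are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                          # host -> namespace, as in the trace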
00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.221 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:49.206 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:49.206 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:12:49.206 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:49.206 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.206 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:12:49.466 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:01.678 Initializing NVMe Controllers 00:13:01.678 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:01.678 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:01.678 Initialization complete. Launching workers. 00:13:01.678 ======================================================== 00:13:01.678 Latency(us) 00:13:01.678 Device Information : IOPS MiB/s Average min max 00:13:01.678 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17040.33 66.56 3756.82 606.41 21468.93 00:13:01.678 ======================================================== 00:13:01.678 Total : 17040.33 66.56 3756.82 606.41 21468.93 00:13:01.678 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.678 rmmod nvme_tcp 00:13:01.678 rmmod nvme_fabrics 00:13:01.678 rmmod nvme_keyring 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71352 ']' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71352 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 71352 ']' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 71352 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o 
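Condensing the run above: once waitforlisten sees the example app on /var/tmp/spdk.sock, the rpc_cmd sequence assembles the target (a TCP transport with an 8192-byte I/O unit, a 64 MiB malloc bdev with 512 B blocks, a subsystem exposing it as namespace 1, and a listener on 10.0.0.3:4420), and spdk_nvme_perf then drives it with 4 KiB mixed random I/O (-M 30) at queue depth 64 for 10 seconds. A hedged equivalent using scripts/rpc.py, which the rpc_cmd helper wraps (default RPC socket assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

As a sanity check on the result table, the MiB/s column is just IOPS times the 4 KiB I/O size: 17040.33 x 4096 / 2^20 ≈ 66.56 MiB/s, matching the reported figure.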
comm= 71352 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:13:01.678 killing process with pid 71352 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71352' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 71352 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 71352 00:13:01.678 nvmf threads initialize successfully 00:13:01.678 bdev subsystem init successfully 00:13:01.678 created a nvmf target service 00:13:01.678 create targets's poll groups done 00:13:01.678 all subsystems of target started 00:13:01.678 nvmf target is running 00:13:01.678 all subsystems of target stopped 00:13:01.678 destroy targets's poll groups done 00:13:01.678 destroyed the nvmf target service 00:13:01.678 bdev subsystem finish successfully 00:13:01.678 nvmf threads destroy successfully 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:01.678 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.678 00:13:01.678 real 0m13.046s 00:13:01.678 user 0m44.305s 00:13:01.678 sys 0m2.715s 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.678 ************************************ 00:13:01.678 END TEST nvmf_example 00:13:01.678 ************************************ 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.678 ************************************ 00:13:01.678 START TEST nvmf_filesystem 00:13:01.678 ************************************ 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:01.678 * Looking for test storage... 
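A note on the cleanup that closed out nvmf_example above: nvmftestfini unwinds the environment in reverse. iptr restores the firewall by replaying iptables-save output with the SPDK_NVMF-tagged rules filtered out, then nvmf_veth_fini deletes the bridge, the veth pairs, and finally the namespace. A condensed sketch, reconstructed from the trace (the final step is an assumption about _remove_spdk_ns, whose body runs with xtrace disabled and so is not shown in the log):

    # Strip exactly the rules ipts() tagged during setup:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns (not shown in log)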
00:13:01.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:01.678 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.679 --rc genhtml_branch_coverage=1 00:13:01.679 --rc genhtml_function_coverage=1 00:13:01.679 --rc genhtml_legend=1 00:13:01.679 --rc geninfo_all_blocks=1 00:13:01.679 --rc geninfo_unexecuted_blocks=1 00:13:01.679 00:13:01.679 ' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.679 --rc genhtml_branch_coverage=1 00:13:01.679 --rc genhtml_function_coverage=1 00:13:01.679 --rc genhtml_legend=1 00:13:01.679 --rc geninfo_all_blocks=1 00:13:01.679 --rc geninfo_unexecuted_blocks=1 00:13:01.679 00:13:01.679 ' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.679 --rc genhtml_branch_coverage=1 00:13:01.679 --rc genhtml_function_coverage=1 00:13:01.679 --rc genhtml_legend=1 00:13:01.679 --rc geninfo_all_blocks=1 00:13:01.679 --rc geninfo_unexecuted_blocks=1 00:13:01.679 00:13:01.679 ' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.679 --rc genhtml_branch_coverage=1 00:13:01.679 --rc genhtml_function_coverage=1 00:13:01.679 --rc genhtml_legend=1 00:13:01.679 --rc geninfo_all_blocks=1 00:13:01.679 --rc geninfo_unexecuted_blocks=1 00:13:01.679 00:13:01.679 ' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:01.679 13:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:13:01.679 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:01.680 #define SPDK_CONFIG_H 00:13:01.680 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:01.680 #define SPDK_CONFIG_APPS 1 00:13:01.680 #define SPDK_CONFIG_ARCH 
native 00:13:01.680 #undef SPDK_CONFIG_ASAN 00:13:01.680 #define SPDK_CONFIG_AVAHI 1 00:13:01.680 #undef SPDK_CONFIG_CET 00:13:01.680 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:01.680 #define SPDK_CONFIG_COVERAGE 1 00:13:01.680 #define SPDK_CONFIG_CROSS_PREFIX 00:13:01.680 #undef SPDK_CONFIG_CRYPTO 00:13:01.680 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:01.680 #undef SPDK_CONFIG_CUSTOMOCF 00:13:01.680 #undef SPDK_CONFIG_DAOS 00:13:01.680 #define SPDK_CONFIG_DAOS_DIR 00:13:01.680 #define SPDK_CONFIG_DEBUG 1 00:13:01.680 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:01.680 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:01.680 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:01.680 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:01.680 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:01.680 #undef SPDK_CONFIG_DPDK_UADK 00:13:01.680 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:01.680 #define SPDK_CONFIG_EXAMPLES 1 00:13:01.680 #undef SPDK_CONFIG_FC 00:13:01.680 #define SPDK_CONFIG_FC_PATH 00:13:01.680 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:01.680 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:01.680 #define SPDK_CONFIG_FSDEV 1 00:13:01.680 #undef SPDK_CONFIG_FUSE 00:13:01.680 #undef SPDK_CONFIG_FUZZER 00:13:01.680 #define SPDK_CONFIG_FUZZER_LIB 00:13:01.680 #define SPDK_CONFIG_GOLANG 1 00:13:01.680 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:01.680 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:01.680 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:01.680 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:01.680 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:01.680 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:01.680 #undef SPDK_CONFIG_HAVE_LZ4 00:13:01.680 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:01.680 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:01.680 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:01.680 #define SPDK_CONFIG_IDXD 1 00:13:01.680 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:01.680 #undef SPDK_CONFIG_IPSEC_MB 00:13:01.680 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:01.680 #define SPDK_CONFIG_ISAL 1 00:13:01.680 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:01.680 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:01.680 #define SPDK_CONFIG_LIBDIR 00:13:01.680 #undef SPDK_CONFIG_LTO 00:13:01.680 #define SPDK_CONFIG_MAX_LCORES 128 00:13:01.680 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:01.680 #define SPDK_CONFIG_NVME_CUSE 1 00:13:01.680 #undef SPDK_CONFIG_OCF 00:13:01.680 #define SPDK_CONFIG_OCF_PATH 00:13:01.680 #define SPDK_CONFIG_OPENSSL_PATH 00:13:01.680 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:01.680 #define SPDK_CONFIG_PGO_DIR 00:13:01.680 #undef SPDK_CONFIG_PGO_USE 00:13:01.680 #define SPDK_CONFIG_PREFIX /usr/local 00:13:01.680 #undef SPDK_CONFIG_RAID5F 00:13:01.680 #undef SPDK_CONFIG_RBD 00:13:01.680 #define SPDK_CONFIG_RDMA 1 00:13:01.680 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:01.680 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:01.680 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:01.680 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:01.680 #define SPDK_CONFIG_SHARED 1 00:13:01.680 #undef SPDK_CONFIG_SMA 00:13:01.680 #define SPDK_CONFIG_TESTS 1 00:13:01.680 #undef SPDK_CONFIG_TSAN 00:13:01.680 #define SPDK_CONFIG_UBLK 1 00:13:01.680 #define SPDK_CONFIG_UBSAN 1 00:13:01.680 #undef SPDK_CONFIG_UNIT_TESTS 00:13:01.680 #undef SPDK_CONFIG_URING 00:13:01.680 #define SPDK_CONFIG_URING_PATH 00:13:01.680 #undef SPDK_CONFIG_URING_ZNS 00:13:01.680 #define SPDK_CONFIG_USDT 1 00:13:01.680 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:01.680 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:01.680 
#undef SPDK_CONFIG_VFIO_USER 00:13:01.680 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:01.680 #define SPDK_CONFIG_VHOST 1 00:13:01.680 #define SPDK_CONFIG_VIRTIO 1 00:13:01.680 #undef SPDK_CONFIG_VTUNE 00:13:01.680 #define SPDK_CONFIG_VTUNE_DIR 00:13:01.680 #define SPDK_CONFIG_WERROR 1 00:13:01.680 #define SPDK_CONFIG_WPDK_DIR 00:13:01.680 #undef SPDK_CONFIG_XNVME 00:13:01.680 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.680 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:01.681 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:01.682 
13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:01.682 13:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:01.682 13:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:01.682 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.683 
13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 
00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 71629 ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 71629 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.Fs366x 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Fs366x/tests/target /tmp/spdk.Fs366x 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13981286400 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5587484672 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256398336 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266429440 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.683 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13981286400 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5587484672 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 
00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266290176 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266429440 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=139264 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253273600 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253285888 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=fuse.sshfs 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=93730242560 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5972537344 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:01.684 * Looking for test storage... 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13981286400 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:01.684 13:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:01.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.684 --rc genhtml_branch_coverage=1 00:13:01.684 --rc genhtml_function_coverage=1 00:13:01.684 --rc genhtml_legend=1 00:13:01.684 --rc geninfo_all_blocks=1 00:13:01.684 --rc geninfo_unexecuted_blocks=1 00:13:01.684 00:13:01.684 ' 00:13:01.684 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:01.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.685 --rc genhtml_branch_coverage=1 00:13:01.685 --rc genhtml_function_coverage=1 00:13:01.685 --rc genhtml_legend=1 00:13:01.685 --rc geninfo_all_blocks=1 00:13:01.685 --rc geninfo_unexecuted_blocks=1 00:13:01.685 00:13:01.685 ' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:01.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.685 --rc genhtml_branch_coverage=1 00:13:01.685 --rc genhtml_function_coverage=1 00:13:01.685 --rc genhtml_legend=1 00:13:01.685 --rc geninfo_all_blocks=1 00:13:01.685 --rc geninfo_unexecuted_blocks=1 00:13:01.685 00:13:01.685 ' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:01.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.685 --rc genhtml_branch_coverage=1 00:13:01.685 --rc genhtml_function_coverage=1 00:13:01.685 --rc genhtml_legend=1 00:13:01.685 --rc geninfo_all_blocks=1 00:13:01.685 --rc geninfo_unexecuted_blocks=1 00:13:01.685 00:13:01.685 ' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
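
The lt/cmp_versions trace above (scripts/common.sh@333-368) is a pure-bash dotted-version compare used to decide whether the installed lcov 1.15 predates 2.x: each version string is split into fields on ., - and :, and fields are compared numerically left to right. A condensed sketch of the same logic, assuming numeric fields as the decimal guard in the trace enforces:

    cmp_lt() {                       # returns 0 when $1 < $2, e.g. cmp_lt 1.15 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields compare as 0, so 1.15 is treated as 1.15.0 vs 2.0.0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                     # equal is not less-than
    }
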
# uname -s 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.685 13:42:41 
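
Note the one genuine shell error in this stretch: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and prints "[: : integer expression expected", because test's -eq requires integers on both sides and the variable being tested expands to an empty string. The run survives only because the failing test takes the false branch. The variable and function names below are placeholders; the guard patterns are the standard fixes:

    # brittle: fails exactly as in the log when the flag is unset or empty
    [ "$SOME_FLAG" -eq 1 ] && configure_extra_nics
    # robust: default the expansion before the numeric test
    [ "${SOME_FLAG:-0}" -eq 1 ] && configure_extra_nics
    # or use arithmetic evaluation, where an unset/empty name counts as 0
    (( SOME_FLAG == 1 )) && configure_extra_nics
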
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:01.685 Cannot find device "nvmf_init_br" 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:13:01.685 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:01.685 Cannot find device "nvmf_init_br2" 00:13:01.685 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:13:01.685 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:01.685 Cannot find device "nvmf_tgt_br" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.686 Cannot find device "nvmf_tgt_br2" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:01.686 Cannot find device "nvmf_init_br" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:01.686 Cannot find device "nvmf_init_br2" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:01.686 Cannot find device "nvmf_tgt_br" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:01.686 Cannot find device "nvmf_tgt_br2" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:01.686 Cannot find device "nvmf_br" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:01.686 Cannot find device "nvmf_init_if" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:01.686 Cannot find device "nvmf_init_if2" 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.686 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:01.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:01.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:01.686 00:13:01.686 --- 10.0.0.3 ping statistics --- 00:13:01.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.686 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:01.686 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:01.686 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:13:01.686 00:13:01.686 --- 10.0.0.4 ping statistics --- 00:13:01.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.686 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:01.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:01.686 00:13:01.686 --- 10.0.0.1 ping statistics --- 00:13:01.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.686 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:01.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:01.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:01.686 00:13:01.686 --- 10.0.0.2 ping statistics --- 00:13:01.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.686 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.686 ************************************ 00:13:01.686 START TEST nvmf_filesystem_no_in_capsule 00:13:01.686 ************************************ 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71818 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71818 00:13:01.686 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 71818 ']' 00:13:01.686 13:42:42 
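
The four pings above are the acceptance check for the topology built in nvmf/common.sh@177-219: two veth pairs face the host (10.0.0.1/.2), two face the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), one end of every pair is enslaved to the nvmf_br bridge, and iptables ACCEPT rules are punched for TCP port 4420. Condensed from the commands traced above (the remaining "ip link set ... up" calls are elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk     # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br               # bridge stitches both sides
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
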
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.687 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.687 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.687 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.687 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.687 [2024-11-06 13:42:42.550523] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:13:01.687 [2024-11-06 13:42:42.550616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.945 [2024-11-06 13:42:42.702598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.945 [2024-11-06 13:42:42.779330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.945 [2024-11-06 13:42:42.779519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.945 [2024-11-06 13:42:42.779609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.945 [2024-11-06 13:42:42.779656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.945 [2024-11-06 13:42:42.779683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
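
nvmfappstart runs the target inside the namespace, so it is reachable only across the veth/bridge fabric, and waitforlisten blocks until the RPC socket answers before any configuration is attempted. A sketch of that start-and-wait pattern; the polling loop body is an assumption, since the log only shows the waitforlisten call and its banner:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is up
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target already died
        sleep 0.5
    done
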
00:13:01.945 [2024-11-06 13:42:42.781047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.945 [2024-11-06 13:42:42.781242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.945 [2024-11-06 13:42:42.782111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.945 [2024-11-06 13:42:42.782111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 [2024-11-06 13:42:43.524532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 Malloc1 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.881 13:42:43 
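
With the reactors up, the target is provisioned entirely over JSON-RPC; together with the namespace attach and listener calls that follow in the next entries, the rpc_cmd sequence traced here amounts to (rpc.py standing in for the harness's rpc_cmd wrapper):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: no in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
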
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 [2024-11-06 13:42:43.767648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:02.881 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:02.882 { 00:13:02.882 "aliases": [ 00:13:02.882 "f1d85fbe-e713-4bc4-ab3c-72177a72b2e9" 00:13:02.882 ], 00:13:02.882 "assigned_rate_limits": { 00:13:02.882 "r_mbytes_per_sec": 0, 00:13:02.882 "rw_ios_per_sec": 0, 00:13:02.882 "rw_mbytes_per_sec": 0, 00:13:02.882 "w_mbytes_per_sec": 0 00:13:02.882 }, 00:13:02.882 "block_size": 512, 00:13:02.882 "claim_type": "exclusive_write", 00:13:02.882 "claimed": true, 00:13:02.882 "driver_specific": {}, 00:13:02.882 "memory_domains": [ 00:13:02.882 { 00:13:02.882 "dma_device_id": "system", 00:13:02.882 "dma_device_type": 1 00:13:02.882 }, 00:13:02.882 { 00:13:02.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.882 
"dma_device_type": 2 00:13:02.882 } 00:13:02.882 ], 00:13:02.882 "name": "Malloc1", 00:13:02.882 "num_blocks": 1048576, 00:13:02.882 "product_name": "Malloc disk", 00:13:02.882 "supported_io_types": { 00:13:02.882 "abort": true, 00:13:02.882 "compare": false, 00:13:02.882 "compare_and_write": false, 00:13:02.882 "copy": true, 00:13:02.882 "flush": true, 00:13:02.882 "get_zone_info": false, 00:13:02.882 "nvme_admin": false, 00:13:02.882 "nvme_io": false, 00:13:02.882 "nvme_io_md": false, 00:13:02.882 "nvme_iov_md": false, 00:13:02.882 "read": true, 00:13:02.882 "reset": true, 00:13:02.882 "seek_data": false, 00:13:02.882 "seek_hole": false, 00:13:02.882 "unmap": true, 00:13:02.882 "write": true, 00:13:02.882 "write_zeroes": true, 00:13:02.882 "zcopy": true, 00:13:02.882 "zone_append": false, 00:13:02.882 "zone_management": false 00:13:02.882 }, 00:13:02.882 "uuid": "f1d85fbe-e713-4bc4-ab3c-72177a72b2e9", 00:13:02.882 "zoned": false 00:13:02.882 } 00:13:02.882 ]' 00:13:02.882 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:03.140 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:03.140 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.140 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:03.140 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.140 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:03.140 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:05.664 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:06.597 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.598 ************************************ 00:13:06.598 START TEST filesystem_ext4 00:13:06.598 ************************************ 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
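
On the initiator side the flow just traced is: connect with the generated host NQN/ID, wait until lsblk reports a device carrying the subsystem serial, verify the exported size matches the 536870912-byte malloc bdev, then lay down one GPT partition for the filesystem runs. Condensed from the trace; the wait loop stands in for waitforserial, which polls with sleep 2:

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2
    done
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
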
00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:06.598 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:06.598 mke2fs 1.47.0 (5-Feb-2023) 00:13:06.598 Discarding device blocks: 0/522240 done 00:13:06.598 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:06.598 Filesystem UUID: eaf8e0da-d8ec-41d9-8d6d-d28b76d056ef 00:13:06.598 Superblock backups stored on blocks: 00:13:06.598 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:06.598 00:13:06.598 Allocating group tables: 0/64 done 00:13:06.598 Writing inode tables: 0/64 done 00:13:06.598 Creating journal (8192 blocks): done 00:13:06.855 Writing superblocks and filesystem accounting information: 0/64 done 00:13:06.855 00:13:06.855 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:06.855 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:12.154 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:12.154 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:12.154 
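
Every per-filesystem test here repeats the same smoke cycle from filesystem.sh@23-43, visible just above and continuing below: mount, create and remove a file with syncs around it, unmount, then prove the target (pid 71818) survived the I/O and that the device and partition are still visible:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 71818                               # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present
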
13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71818 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:12.154 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:12.411 ************************************ 00:13:12.411 END TEST filesystem_ext4 00:13:12.411 ************************************ 00:13:12.411 00:13:12.411 real 0m5.752s 00:13:12.411 user 0m0.030s 00:13:12.411 sys 0m0.110s 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:12.411 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.411 ************************************ 00:13:12.411 START TEST filesystem_btrfs 00:13:12.411 ************************************ 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:12.412 13:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:12.412 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:12.670 btrfs-progs v6.8.1 00:13:12.670 See https://btrfs.readthedocs.io for more information. 00:13:12.670 00:13:12.670 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:12.670 NOTE: several default settings have changed in version 5.15, please make sure 00:13:12.670 this does not affect your deployments: 00:13:12.670 - DUP for metadata (-m dup) 00:13:12.670 - enabled no-holes (-O no-holes) 00:13:12.670 - enabled free-space-tree (-R free-space-tree) 00:13:12.670 00:13:12.670 Label: (null) 00:13:12.670 UUID: 69464e4a-cfd5-4803-9059-06cde2e25f36 00:13:12.670 Node size: 16384 00:13:12.670 Sector size: 4096 (CPU page size: 4096) 00:13:12.670 Filesystem size: 510.00MiB 00:13:12.670 Block group profiles: 00:13:12.670 Data: single 8.00MiB 00:13:12.670 Metadata: DUP 32.00MiB 00:13:12.670 System: DUP 8.00MiB 00:13:12.670 SSD detected: yes 00:13:12.670 Zoned device: no 00:13:12.670 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:12.670 Checksum: crc32c 00:13:12.670 Number of devices: 1 00:13:12.670 Devices: 00:13:12.670 ID SIZE PATH 00:13:12.670 1 510.00MiB /dev/nvme0n1p1 00:13:12.670 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71818 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:12.670 
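
The make_filesystem wrapper traced at common/autotest_common.sh@928-939 is what selects the force flag per filesystem: -F for ext4, -f for btrfs and xfs. Only the happy path through mkfs and the return 0 at @947 appear in this log, so the retry loop below is an assumption about the elided middle:

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        # assumed retry: a freshly probed partition can be transiently busy
        while ! mkfs."$fstype" $force "$dev_name"; do
            (( ++i >= 5 )) && return 1
            sleep 1
        done
        return 0
    }
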
13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:12.670 ************************************ 00:13:12.670 END TEST filesystem_btrfs 00:13:12.670 ************************************ 00:13:12.670 00:13:12.670 real 0m0.373s 00:13:12.670 user 0m0.034s 00:13:12.670 sys 0m0.090s 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:12.670 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.928 ************************************ 00:13:12.928 START TEST filesystem_xfs 00:13:12.928 ************************************ 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:12.928 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:12.928 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:12.928 = sectsz=512 attr=2, projid32bit=1 00:13:12.928 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:12.928 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:12.928 data = 
bsize=4096 blocks=130560, imaxpct=25 00:13:12.928 = sunit=0 swidth=0 blks 00:13:12.928 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:12.928 log =internal log bsize=4096 blocks=16384, version=2 00:13:12.928 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:12.928 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:13.531 Discarding blocks...Done. 00:13:13.531 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:13.531 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71818 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.064 ************************************ 00:13:16.064 END TEST filesystem_xfs 00:13:16.064 ************************************ 00:13:16.064 00:13:16.064 real 0m3.127s 00:13:16.064 user 0m0.033s 00:13:16.064 sys 0m0.092s 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.064 13:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71818 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 71818 ']' 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 71818 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71818 00:13:16.064 killing process with pid 71818 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71818' 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 71818 00:13:16.064 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 71818 00:13:16.632 ************************************ 00:13:16.632 END TEST nvmf_filesystem_no_in_capsule 00:13:16.632 ************************************ 00:13:16.632 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:16.632 00:13:16.632 real 0m15.034s 00:13:16.632 user 0m57.003s 00:13:16.632 sys 0m2.851s 00:13:16.632 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.632 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:16.890 ************************************ 00:13:16.890 START TEST nvmf_filesystem_in_capsule 00:13:16.890 ************************************ 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:16.890 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72196 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72196 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 72196 ']' 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.891 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.891 [2024-11-06 13:42:57.651527] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:13:16.891 [2024-11-06 13:42:57.651619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.891 [2024-11-06 13:42:57.792963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.149 [2024-11-06 13:42:57.871455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.150 [2024-11-06 13:42:57.871519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.150 [2024-11-06 13:42:57.871530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.150 [2024-11-06 13:42:57.871539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.150 [2024-11-06 13:42:57.871546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.150 [2024-11-06 13:42:57.872959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.150 [2024-11-06 13:42:57.873061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.150 [2024-11-06 13:42:57.873174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.150 [2024-11-06 13:42:57.873176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.716 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.716 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:17.716 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.716 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.716 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 [2024-11-06 13:42:58.659377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.980 13:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.980 [2024-11-06 13:42:58.889771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:17.980 13:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.980 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.238 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.238 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:18.238 { 00:13:18.238 "aliases": [ 00:13:18.238 "62b82361-8b7b-4b06-91ee-1f29ad1b661f" 00:13:18.238 ], 00:13:18.238 "assigned_rate_limits": { 00:13:18.238 "r_mbytes_per_sec": 0, 00:13:18.238 "rw_ios_per_sec": 0, 00:13:18.238 "rw_mbytes_per_sec": 0, 00:13:18.238 "w_mbytes_per_sec": 0 00:13:18.238 }, 00:13:18.238 "block_size": 512, 00:13:18.238 "claim_type": "exclusive_write", 00:13:18.238 "claimed": true, 00:13:18.238 "driver_specific": {}, 00:13:18.238 "memory_domains": [ 00:13:18.238 { 00:13:18.238 "dma_device_id": "system", 00:13:18.238 "dma_device_type": 1 00:13:18.238 }, 00:13:18.238 { 00:13:18.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.238 "dma_device_type": 2 00:13:18.238 } 00:13:18.238 ], 00:13:18.238 "name": "Malloc1", 00:13:18.238 "num_blocks": 1048576, 00:13:18.238 "product_name": "Malloc disk", 00:13:18.238 "supported_io_types": { 00:13:18.238 "abort": true, 00:13:18.238 "compare": false, 00:13:18.238 "compare_and_write": false, 00:13:18.238 "copy": true, 00:13:18.238 "flush": true, 00:13:18.238 "get_zone_info": false, 00:13:18.238 "nvme_admin": false, 00:13:18.238 "nvme_io": false, 00:13:18.238 "nvme_io_md": false, 00:13:18.238 "nvme_iov_md": false, 00:13:18.238 "read": true, 00:13:18.238 "reset": true, 00:13:18.238 "seek_data": false, 00:13:18.238 "seek_hole": false, 00:13:18.238 "unmap": true, 00:13:18.238 "write": true, 00:13:18.238 "write_zeroes": true, 00:13:18.238 "zcopy": true, 00:13:18.238 "zone_append": false, 00:13:18.238 "zone_management": false 00:13:18.238 }, 00:13:18.238 "uuid": "62b82361-8b7b-4b06-91ee-1f29ad1b661f", 00:13:18.238 "zoned": false 00:13:18.238 } 00:13:18.238 ]' 00:13:18.238 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:18.238 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:18.238 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:18.238 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:18.238 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:18.238 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:18.238 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:18.238 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:18.497 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.497 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:18.497 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.497 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:18.497 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:20.394 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:20.395 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:20.395 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:20.395 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:20.395 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:20.652 13:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:20.653 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.587 ************************************ 00:13:21.587 START TEST filesystem_in_capsule_ext4 00:13:21.587 ************************************ 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:21.587 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:21.587 mke2fs 1.47.0 (5-Feb-2023) 00:13:21.845 Discarding device blocks: 0/522240 done 00:13:21.845 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:21.845 Filesystem UUID: bdd30940-6759-4410-8959-fbe0e428e847 00:13:21.845 Superblock backups stored on blocks: 00:13:21.845 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:21.845 00:13:21.845 Allocating group tables: 0/64 done 00:13:21.845 Writing inode tables: 
0/64 done 00:13:21.845 Creating journal (8192 blocks): done 00:13:21.845 Writing superblocks and filesystem accounting information: 0/64 done 00:13:21.845 00:13:21.845 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:21.845 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:27.114 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:27.114 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:27.373 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:27.373 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:27.373 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:27.373 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72196 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:27.374 00:13:27.374 real 0m5.690s 00:13:27.374 user 0m0.036s 00:13:27.374 sys 0m0.090s 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:27.374 ************************************ 00:13:27.374 END TEST filesystem_in_capsule_ext4 00:13:27.374 ************************************ 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.374 
************************************ 00:13:27.374 START TEST filesystem_in_capsule_btrfs 00:13:27.374 ************************************ 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:27.374 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:27.632 btrfs-progs v6.8.1 00:13:27.632 See https://btrfs.readthedocs.io for more information. 00:13:27.632 00:13:27.632 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:27.632 NOTE: several default settings have changed in version 5.15, please make sure 00:13:27.632 this does not affect your deployments: 00:13:27.632 - DUP for metadata (-m dup) 00:13:27.632 - enabled no-holes (-O no-holes) 00:13:27.632 - enabled free-space-tree (-R free-space-tree) 00:13:27.632 00:13:27.632 Label: (null) 00:13:27.632 UUID: df911071-dd83-4859-bcc7-df28f35ef95d 00:13:27.632 Node size: 16384 00:13:27.632 Sector size: 4096 (CPU page size: 4096) 00:13:27.632 Filesystem size: 510.00MiB 00:13:27.632 Block group profiles: 00:13:27.632 Data: single 8.00MiB 00:13:27.632 Metadata: DUP 32.00MiB 00:13:27.632 System: DUP 8.00MiB 00:13:27.632 SSD detected: yes 00:13:27.632 Zoned device: no 00:13:27.632 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:27.632 Checksum: crc32c 00:13:27.632 Number of devices: 1 00:13:27.632 Devices: 00:13:27.632 ID SIZE PATH 00:13:27.632 1 510.00MiB /dev/nvme0n1p1 00:13:27.632 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72196 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:27.632 00:13:27.632 real 0m0.366s 00:13:27.632 user 0m0.027s 00:13:27.632 sys 0m0.099s 00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:27.632 ************************************ 00:13:27.632 END TEST filesystem_in_capsule_btrfs 00:13:27.632 ************************************ 
00:13:27.632 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.891 ************************************ 00:13:27.891 START TEST filesystem_in_capsule_xfs 00:13:27.891 ************************************ 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:27.891 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:27.892 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:13:27.892 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:27.892 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:27.892 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:27.892 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:27.892 = sectsz=512 attr=2, projid32bit=1 00:13:27.892 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:27.892 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:27.892 data = bsize=4096 blocks=130560, imaxpct=25 00:13:27.892 = sunit=0 swidth=0 blks 00:13:27.892 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:27.892 log =internal log bsize=4096 blocks=16384, version=2 00:13:27.892 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:27.892 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:28.827 Discarding blocks...Done. 
00:13:28.827 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:28.827 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72196 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.732 ************************************ 00:13:30.732 END TEST filesystem_in_capsule_xfs 00:13:30.732 ************************************ 00:13:30.732 00:13:30.732 real 0m2.803s 00:13:30.732 user 0m0.032s 00:13:30.732 sys 0m0.088s 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.732 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72196 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 72196 ']' 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 72196 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72196 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72196' 00:13:30.991 killing process with pid 72196 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 72196 00:13:30.991 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 72196 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:31.560 00:13:31.560 real 0m14.706s 00:13:31.560 user 0m55.408s 00:13:31.560 sys 0m3.251s 00:13:31.560 13:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.560 ************************************ 00:13:31.560 END TEST nvmf_filesystem_in_capsule 00:13:31.560 ************************************ 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.560 rmmod nvme_tcp 00:13:31.560 rmmod nvme_fabrics 00:13:31.560 rmmod nvme_keyring 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:31.560 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.819 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:13:32.079 00:13:32.079 real 0m31.445s 00:13:32.079 user 1m52.957s 00:13:32.079 sys 0m6.919s 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.079 ************************************ 00:13:32.079 END TEST nvmf_filesystem 00:13:32.079 ************************************ 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.079 ************************************ 00:13:32.079 START TEST nvmf_target_discovery 00:13:32.079 ************************************ 00:13:32.079 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:32.340 * Looking for test storage... 
00:13:32.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.340 --rc genhtml_branch_coverage=1 00:13:32.340 --rc genhtml_function_coverage=1 00:13:32.340 --rc genhtml_legend=1 00:13:32.340 --rc geninfo_all_blocks=1 00:13:32.340 --rc geninfo_unexecuted_blocks=1 00:13:32.340 00:13:32.340 ' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.340 --rc genhtml_branch_coverage=1 00:13:32.340 --rc genhtml_function_coverage=1 00:13:32.340 --rc genhtml_legend=1 00:13:32.340 --rc geninfo_all_blocks=1 00:13:32.340 --rc geninfo_unexecuted_blocks=1 00:13:32.340 00:13:32.340 ' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.340 --rc genhtml_branch_coverage=1 00:13:32.340 --rc genhtml_function_coverage=1 00:13:32.340 --rc genhtml_legend=1 00:13:32.340 --rc geninfo_all_blocks=1 00:13:32.340 --rc geninfo_unexecuted_blocks=1 00:13:32.340 00:13:32.340 ' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:32.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.340 --rc genhtml_branch_coverage=1 00:13:32.340 --rc genhtml_function_coverage=1 00:13:32.340 --rc genhtml_legend=1 00:13:32.340 --rc geninfo_all_blocks=1 00:13:32.340 --rc geninfo_unexecuted_blocks=1 00:13:32.340 00:13:32.340 ' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.340 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
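Annotation: the records below first tear down any leftover interfaces (the "Cannot find device" lines are the expected result of cleaning a fresh host) and then build the virtual topology nvmf_veth_init uses for NET_TYPE=virt: two veth pairs whose bridge-side peers are enslaved to nvmf_br, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of one initiator/target pair follows; interface names and addresses are copied from the trace, and the real helper in test/nvmf/common.sh also creates the second *_if2/*_br2 pair and must run as root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The ping checks to 10.0.0.1-10.0.0.4 further down then verify both directions across the bridge before the target is started.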
00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.341 Cannot find device "nvmf_init_br" 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.341 Cannot find device "nvmf_init_br2" 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:13:32.341 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.600 Cannot find device "nvmf_tgt_br" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.600 Cannot find device "nvmf_tgt_br2" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.600 Cannot find device "nvmf_init_br" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.600 Cannot find device "nvmf_init_br2" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.600 Cannot find device "nvmf_tgt_br" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.600 Cannot find device "nvmf_tgt_br2" 00:13:32.600 13:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.600 Cannot find device "nvmf_br" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:32.600 Cannot find device "nvmf_init_if" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:32.600 Cannot find device "nvmf_init_if2" 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:13:32.600 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.601 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:32.861 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:32.862 13:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:32.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:32.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:13:32.862 00:13:32.862 --- 10.0.0.3 ping statistics --- 00:13:32.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.862 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:32.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:32.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:13:32.862 00:13:32.862 --- 10.0.0.4 ping statistics --- 00:13:32.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.862 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:32.862 00:13:32.862 --- 10.0.0.1 ping statistics --- 00:13:32.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.862 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:32.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:13:32.862 00:13:32.862 --- 10.0.0.2 ping statistics --- 00:13:32.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.862 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72793 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
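Annotation: at this point the harness launches the target application inside the namespace. -i 0 selects the shared-memory ID, -e 0xFFFF enables all tracepoint groups, and -m 0xF pins reactors to cores 0-3 (matching the four reactor_run notices below). waitforlisten then blocks until the RPC socket answers; a simplified stand-in for that helper (the real one in autotest_common.sh is more robust about retries and PID checks) might look like:

    sudo ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app responds (simplified waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done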
00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72793 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 72793 ']' 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.862 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.127 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.127 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.127 [2024-11-06 13:43:13.854046] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:13:33.127 [2024-11-06 13:43:13.854131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.127 [2024-11-06 13:43:14.010975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.385 [2024-11-06 13:43:14.081263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.385 [2024-11-06 13:43:14.081324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.385 [2024-11-06 13:43:14.081335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.385 [2024-11-06 13:43:14.081344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.385 [2024-11-06 13:43:14.081351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
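Annotation: with the reactors up, the records that follow configure the target entirely over JSON-RPC: one TCP transport, then four null bdevs, each exported through its own subsystem and listener, plus a discovery listener and a referral on port 4430. Condensed into plain rpc.py calls (arguments copied from the trace; rpc_cmd in the test is a thin wrapper around this):

    rpc() { ./scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192                  # -u: 8192-byte in-capsule data
    for i in 1 2 3 4; do
        rpc bdev_null_create "Null$i" 102400 512                 # size/block size as in the trace
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                          # -a: allow any host, -s: serial number
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430

The nvme discover output further down should then report six discovery-log records: the current discovery subsystem, the four cnode subsystems, and the referral.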
00:13:33.385 [2024-11-06 13:43:14.082773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.385 [2024-11-06 13:43:14.082977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.385 [2024-11-06 13:43:14.083083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.385 [2024-11-06 13:43:14.083085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 [2024-11-06 13:43:14.818439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 Null1 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 13:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 [2024-11-06 13:43:14.878544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.952 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 Null2 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:34.211 Null3 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 Null4 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.211 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.212 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 4420 00:13:34.470 00:13:34.470 Discovery Log Number of Records 6, Generation counter 6 00:13:34.470 =====Discovery Log Entry 0====== 00:13:34.470 trtype: tcp 00:13:34.470 adrfam: ipv4 00:13:34.471 subtype: current discovery subsystem 00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4420 00:13:34.471 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: explicit discovery connections, duplicate discovery information 00:13:34.471 sectype: none 00:13:34.471 =====Discovery Log Entry 1====== 00:13:34.471 trtype: tcp 00:13:34.471 adrfam: ipv4 00:13:34.471 subtype: nvme subsystem 00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4420 00:13:34.471 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: none 00:13:34.471 sectype: none 00:13:34.471 =====Discovery Log Entry 2====== 00:13:34.471 trtype: tcp 00:13:34.471 adrfam: ipv4 00:13:34.471 subtype: nvme subsystem 00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4420 00:13:34.471 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: none 00:13:34.471 sectype: none 00:13:34.471 =====Discovery Log Entry 3====== 00:13:34.471 trtype: tcp 00:13:34.471 adrfam: ipv4 00:13:34.471 subtype: nvme subsystem 00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4420 00:13:34.471 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: none 00:13:34.471 sectype: none 00:13:34.471 =====Discovery Log Entry 4====== 00:13:34.471 trtype: tcp 00:13:34.471 adrfam: ipv4 00:13:34.471 subtype: nvme subsystem 
00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4420 00:13:34.471 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: none 00:13:34.471 sectype: none 00:13:34.471 =====Discovery Log Entry 5====== 00:13:34.471 trtype: tcp 00:13:34.471 adrfam: ipv4 00:13:34.471 subtype: discovery subsystem referral 00:13:34.471 treq: not required 00:13:34.471 portid: 0 00:13:34.471 trsvcid: 4430 00:13:34.471 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:34.471 traddr: 10.0.0.3 00:13:34.471 eflags: none 00:13:34.471 sectype: none 00:13:34.471 Perform nvmf subsystem discovery via RPC 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 [ 00:13:34.471 { 00:13:34.471 "allow_any_host": true, 00:13:34.471 "hosts": [], 00:13:34.471 "listen_addresses": [ 00:13:34.471 { 00:13:34.471 "adrfam": "IPv4", 00:13:34.471 "traddr": "10.0.0.3", 00:13:34.471 "trsvcid": "4420", 00:13:34.471 "trtype": "TCP" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:34.471 "subtype": "Discovery" 00:13:34.471 }, 00:13:34.471 { 00:13:34.471 "allow_any_host": true, 00:13:34.471 "hosts": [], 00:13:34.471 "listen_addresses": [ 00:13:34.471 { 00:13:34.471 "adrfam": "IPv4", 00:13:34.471 "traddr": "10.0.0.3", 00:13:34.471 "trsvcid": "4420", 00:13:34.471 "trtype": "TCP" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "max_cntlid": 65519, 00:13:34.471 "max_namespaces": 32, 00:13:34.471 "min_cntlid": 1, 00:13:34.471 "model_number": "SPDK bdev Controller", 00:13:34.471 "namespaces": [ 00:13:34.471 { 00:13:34.471 "bdev_name": "Null1", 00:13:34.471 "name": "Null1", 00:13:34.471 "nguid": "A8CFFC99865E4346AEB1BA2BAFCC5F82", 00:13:34.471 "nsid": 1, 00:13:34.471 "uuid": "a8cffc99-865e-4346-aeb1-ba2bafcc5f82" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:34.471 "serial_number": "SPDK00000000000001", 00:13:34.471 "subtype": "NVMe" 00:13:34.471 }, 00:13:34.471 { 00:13:34.471 "allow_any_host": true, 00:13:34.471 "hosts": [], 00:13:34.471 "listen_addresses": [ 00:13:34.471 { 00:13:34.471 "adrfam": "IPv4", 00:13:34.471 "traddr": "10.0.0.3", 00:13:34.471 "trsvcid": "4420", 00:13:34.471 "trtype": "TCP" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "max_cntlid": 65519, 00:13:34.471 "max_namespaces": 32, 00:13:34.471 "min_cntlid": 1, 00:13:34.471 "model_number": "SPDK bdev Controller", 00:13:34.471 "namespaces": [ 00:13:34.471 { 00:13:34.471 "bdev_name": "Null2", 00:13:34.471 "name": "Null2", 00:13:34.471 "nguid": "8FE9F7DCC0184FAB89B9BC5097293AC7", 00:13:34.471 "nsid": 1, 00:13:34.471 "uuid": "8fe9f7dc-c018-4fab-89b9-bc5097293ac7" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:34.471 "serial_number": "SPDK00000000000002", 00:13:34.471 "subtype": "NVMe" 00:13:34.471 }, 00:13:34.471 { 00:13:34.471 "allow_any_host": true, 00:13:34.471 "hosts": [], 00:13:34.471 "listen_addresses": [ 00:13:34.471 { 00:13:34.471 "adrfam": "IPv4", 00:13:34.471 "traddr": "10.0.0.3", 00:13:34.471 "trsvcid": "4420", 00:13:34.471 
"trtype": "TCP" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "max_cntlid": 65519, 00:13:34.471 "max_namespaces": 32, 00:13:34.471 "min_cntlid": 1, 00:13:34.471 "model_number": "SPDK bdev Controller", 00:13:34.471 "namespaces": [ 00:13:34.471 { 00:13:34.471 "bdev_name": "Null3", 00:13:34.471 "name": "Null3", 00:13:34.471 "nguid": "3536A7FF6CB24F5A91DFED17205FC737", 00:13:34.471 "nsid": 1, 00:13:34.471 "uuid": "3536a7ff-6cb2-4f5a-91df-ed17205fc737" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:34.471 "serial_number": "SPDK00000000000003", 00:13:34.471 "subtype": "NVMe" 00:13:34.471 }, 00:13:34.471 { 00:13:34.471 "allow_any_host": true, 00:13:34.471 "hosts": [], 00:13:34.471 "listen_addresses": [ 00:13:34.471 { 00:13:34.471 "adrfam": "IPv4", 00:13:34.471 "traddr": "10.0.0.3", 00:13:34.471 "trsvcid": "4420", 00:13:34.471 "trtype": "TCP" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "max_cntlid": 65519, 00:13:34.471 "max_namespaces": 32, 00:13:34.471 "min_cntlid": 1, 00:13:34.471 "model_number": "SPDK bdev Controller", 00:13:34.471 "namespaces": [ 00:13:34.471 { 00:13:34.471 "bdev_name": "Null4", 00:13:34.471 "name": "Null4", 00:13:34.471 "nguid": "10EAAFB1BF254E25819FF9CE4E444FEA", 00:13:34.471 "nsid": 1, 00:13:34.471 "uuid": "10eaafb1-bf25-4e25-819f-f9ce4e444fea" 00:13:34.471 } 00:13:34.471 ], 00:13:34.471 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:34.471 "serial_number": "SPDK00000000000004", 00:13:34.471 "subtype": "NVMe" 00:13:34.471 } 00:13:34.471 ] 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 13:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.472 13:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.472 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.472 rmmod nvme_tcp 00:13:34.730 rmmod nvme_fabrics 00:13:34.730 rmmod nvme_keyring 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72793 ']' 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72793 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 72793 ']' 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 72793 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:34.730 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72793 00:13:34.731 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:34.731 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:34.731 killing process with pid 72793 00:13:34.731 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72793' 00:13:34.731 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 72793 00:13:34.731 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 72793 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.989 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:34.990 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:35.249 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:35.249 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.249 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:13:35.249 00:13:35.249 real 0m3.166s 00:13:35.249 user 0m7.036s 00:13:35.249 sys 0m1.098s 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:35.249 ************************************ 00:13:35.249 END TEST nvmf_target_discovery 00:13:35.249 ************************************ 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.249 13:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.249 ************************************ 00:13:35.249 START TEST nvmf_referrals 00:13:35.249 ************************************ 00:13:35.250 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:35.509 * Looking for test storage... 00:13:35.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.509 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.510 --rc genhtml_branch_coverage=1 00:13:35.510 --rc genhtml_function_coverage=1 00:13:35.510 --rc genhtml_legend=1 00:13:35.510 --rc geninfo_all_blocks=1 00:13:35.510 --rc geninfo_unexecuted_blocks=1 00:13:35.510 00:13:35.510 ' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.510 --rc genhtml_branch_coverage=1 00:13:35.510 --rc genhtml_function_coverage=1 00:13:35.510 --rc genhtml_legend=1 00:13:35.510 --rc geninfo_all_blocks=1 00:13:35.510 --rc geninfo_unexecuted_blocks=1 00:13:35.510 00:13:35.510 ' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.510 --rc genhtml_branch_coverage=1 00:13:35.510 --rc genhtml_function_coverage=1 00:13:35.510 --rc genhtml_legend=1 00:13:35.510 --rc geninfo_all_blocks=1 00:13:35.510 --rc geninfo_unexecuted_blocks=1 00:13:35.510 00:13:35.510 ' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.510 --rc genhtml_branch_coverage=1 00:13:35.510 --rc genhtml_function_coverage=1 00:13:35.510 --rc genhtml_legend=1 00:13:35.510 --rc geninfo_all_blocks=1 00:13:35.510 --rc geninfo_unexecuted_blocks=1 00:13:35.510 00:13:35.510 ' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
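The cmp_versions walk traced above reduces to splitting each version string into fields and comparing them numerically, left to right. A minimal sketch of the same idea (simplified: the real scripts/common.sh splits on '-' and ':' as well via IFS=.-: and supports more comparison operators):

lt() {
    # Return 0 (true) when version $1 is strictly lower than version $2.
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
# Usage mirroring the trace: pick lcov options based on the installed version.
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"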
00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:35.510 13:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.510 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.511 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.511 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.511 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:35.770 Cannot find device "nvmf_init_br" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:35.770 Cannot find device "nvmf_init_br2" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:35.770 Cannot find device "nvmf_tgt_br" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.770 Cannot find device "nvmf_tgt_br2" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:35.770 Cannot find device "nvmf_init_br" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:35.770 Cannot find device "nvmf_init_br2" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:35.770 Cannot find device "nvmf_tgt_br" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:35.770 Cannot find device "nvmf_tgt_br2" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:35.770 Cannot find device "nvmf_br" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:35.770 Cannot find device "nvmf_init_if" 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:35.770 Cannot find device "nvmf_init_if2" 00:13:35.770 13:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.770 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.029 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:36.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:36.030 00:13:36.030 --- 10.0.0.3 ping statistics --- 00:13:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.030 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:36.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:36.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:13:36.030 00:13:36.030 --- 10.0.0.4 ping statistics --- 00:13:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.030 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:36.030 00:13:36.030 --- 10.0.0.1 ping statistics --- 00:13:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.030 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:36.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
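The ipts/iptr pair visible in the trace implements self-cleaning firewall rules: every rule is inserted with an SPDK_NVMF comment, so teardown can drop exactly those rules by filtering them out of a save/restore cycle. A standalone sketch of the pattern (interface, port, and comment format copied from the trace):

# Add a rule tagged with a recognizable comment.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# Later, remove every tagged rule in one pass, leaving unrelated rules untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore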
00:13:36.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:36.030 00:13:36.030 --- 10.0.0.2 ping statistics --- 00:13:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.030 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.030 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=73076 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 73076 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 73076 ']' 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:36.289 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.289 [2024-11-06 13:43:17.033834] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
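The nvmfappstart step above boils down to launching the target binary inside the test namespace and polling its JSON-RPC socket until it answers. A rough sketch under those assumptions (the polling loop is a paraphrase of waitforlisten, not the real helper; rpc_get_methods is used here only as a cheap liveness probe):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the app starts listening on it.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done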
00:13:36.289 [2024-11-06 13:43:17.033946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.289 [2024-11-06 13:43:17.187717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.548 [2024-11-06 13:43:17.267756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.548 [2024-11-06 13:43:17.267814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.548 [2024-11-06 13:43:17.267824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.548 [2024-11-06 13:43:17.267833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.548 [2024-11-06 13:43:17.267840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.548 [2024-11-06 13:43:17.269138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.548 [2024-11-06 13:43:17.269342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.548 [2024-11-06 13:43:17.269963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.548 [2024-11-06 13:43:17.269965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 [2024-11-06 13:43:17.991557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.116 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 [2024-11-06 13:43:18.008064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.116 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.375 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.634 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.635 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:37.894 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:37.895 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:38.154 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.154 13:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.154 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:38.413 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:38.414 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 
--hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:38.672 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:38.673 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.931 
13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.931 rmmod nvme_tcp 00:13:38.931 rmmod nvme_fabrics 00:13:38.931 rmmod nvme_keyring 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 73076 ']' 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 73076 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 73076 ']' 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 73076 00:13:38.931 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73076 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:39.190 killing process with pid 73076 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73076' 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 73076 00:13:39.190 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 73076 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:39.448 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:13:39.707 00:13:39.707 real 0m4.357s 00:13:39.707 user 0m12.526s 00:13:39.707 sys 0m1.462s 00:13:39.707 ************************************ 00:13:39.707 END TEST nvmf_referrals 00:13:39.707 ************************************ 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.707 ************************************ 00:13:39.707 START TEST nvmf_connect_disconnect 00:13:39.707 ************************************ 00:13:39.707 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:39.978 * Looking for test storage... 
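An aside for readers reconstructing the trace: the nvmftestfini/nvmf_veth_fini sequence that just closed out the referrals test tears the harness down in a fixed order. A condensed bash sketch of that order follows; it is an editorial illustration assuming root privileges and the interface names this harness uses, not the verbatim body of nvmf/common.sh (in particular, _remove_spdk_ns is never expanded in this trace, so the final netns deletion is an assumption).

    sync
    modprobe -v -r nvme-tcp      # the rmmod lines above show this also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop nvmf_tgt (pid 73076 above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's comment-tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster                        # detach from the bridge...
        ip link set "$dev" down                            # ...then bring it down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # assumed: what _remove_spdk_ns ultimately does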
00:13:39.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:39.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.978 --rc genhtml_branch_coverage=1 00:13:39.978 --rc genhtml_function_coverage=1 00:13:39.978 --rc genhtml_legend=1 00:13:39.978 --rc geninfo_all_blocks=1 00:13:39.978 --rc geninfo_unexecuted_blocks=1 00:13:39.978 00:13:39.978 ' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:39.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.978 --rc genhtml_branch_coverage=1 00:13:39.978 --rc genhtml_function_coverage=1 00:13:39.978 --rc genhtml_legend=1 00:13:39.978 --rc geninfo_all_blocks=1 00:13:39.978 --rc geninfo_unexecuted_blocks=1 00:13:39.978 00:13:39.978 ' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:39.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.978 --rc genhtml_branch_coverage=1 00:13:39.978 --rc genhtml_function_coverage=1 00:13:39.978 --rc genhtml_legend=1 00:13:39.978 --rc geninfo_all_blocks=1 00:13:39.978 --rc geninfo_unexecuted_blocks=1 00:13:39.978 00:13:39.978 ' 00:13:39.978 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:39.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.979 --rc genhtml_branch_coverage=1 00:13:39.979 --rc genhtml_function_coverage=1 00:13:39.979 --rc genhtml_legend=1 00:13:39.979 --rc geninfo_all_blocks=1 00:13:39.979 --rc geninfo_unexecuted_blocks=1 00:13:39.979 00:13:39.979 ' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.979 13:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:39.979 Cannot find device "nvmf_init_br" 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:39.979 Cannot find device "nvmf_init_br2" 00:13:39.979 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:13:39.980 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:40.238 Cannot find device "nvmf_tgt_br" 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.238 Cannot find device "nvmf_tgt_br2" 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:40.238 Cannot find device "nvmf_init_br" 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:40.238 Cannot find device "nvmf_init_br2" 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:40.238 Cannot find device "nvmf_tgt_br" 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:13:40.238 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:40.238 Cannot find device "nvmf_tgt_br2" 00:13:40.238 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
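The "Cannot find device ... / true" pairs above are expected, not failures: nvmf_veth_init begins by deleting any topology left over from a previous run, and each cleanup command shares its source line with a guard (the xtrace shows the command and "true" at the same common.sh line number), so a missing device cannot abort the script. A minimal sketch of that idempotent-cleanup idiom, illustrative rather than the literal common.sh source:

    # On a fresh VM every device is absent; 'ip link' prints "Cannot find
    # device ..." and '|| true' keeps errexit from aborting the run.
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    # Only once the slate is clean is the topology rebuilt, as traced below:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br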
00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:40.239 Cannot find device "nvmf_br" 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:40.239 Cannot find device "nvmf_init_if" 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:40.239 Cannot find device "nvmf_init_if2" 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.239 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:40.498 13:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:40.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:40.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.178 ms 00:13:40.498 00:13:40.498 --- 10.0.0.3 ping statistics --- 00:13:40.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.498 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:40.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:40.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:40.498 00:13:40.498 --- 10.0.0.4 ping statistics --- 00:13:40.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.498 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:13:40.498 00:13:40.498 --- 10.0.0.1 ping statistics --- 00:13:40.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.498 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:40.498 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:40.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:40.498 00:13:40.498 --- 10.0.0.2 ping statistics --- 00:13:40.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.498 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73442 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73442 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 73442 ']' 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:40.757 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 [2024-11-06 13:43:21.542414] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:13:40.757 [2024-11-06 13:43:21.542494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.015 [2024-11-06 13:43:21.696853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.015 [2024-11-06 13:43:21.770952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.015 [2024-11-06 13:43:21.771006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.015 [2024-11-06 13:43:21.771017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.015 [2024-11-06 13:43:21.771025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.015 [2024-11-06 13:43:21.771032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
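At this point nvmfappstart has launched nvmf_tgt inside the namespace, and waitforlisten 73442 (with max_retries=100, traced above) blocks until the target answers on its RPC socket. A rough bash equivalent, assuming the default /var/tmp/spdk.sock and the in-tree scripts/rpc.py with the real rpc_get_methods method as the liveness probe; the actual helper in autotest_common.sh may differ in detail:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                               # RPC server is accepting requests
            fi
            sleep 0.1
        done
        return 1                                       # retries exhausted
    }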
00:13:41.015 [2024-11-06 13:43:21.772319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.015 [2024-11-06 13:43:21.773040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.015 [2024-11-06 13:43:21.773117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.015 [2024-11-06 13:43:21.773117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.582 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:41.582 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:13:41.582 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.582 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.582 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 [2024-11-06 13:43:22.531526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 13:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.840 [2024-11-06 13:43:22.624195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.840 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:41.841 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:41.841 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:44.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.881 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.882 rmmod nvme_tcp 00:13:53.882 rmmod nvme_fabrics 00:13:53.882 rmmod nvme_keyring 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73442 ']' 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 73442 ']' 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
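Each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one iteration of connect_disconnect.sh's main loop (num_iterations=5 in the trace): connect to the subsystem listening on 10.0.0.3:4420, then disconnect it by NQN. A sketch of one iteration using the host NQN/ID generated earlier in this run; the loop wrapper and any wait-for-device handling are illustrative:

    for ((i = 0; i < 5; i++)); do    # num_iterations=5
        nvme connect -t tcp -a 10.0.0.3 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce \
            --hostid=f9261117-b8af-4744-8c64-783a612116ce
        # nvme-cli prints the "... disconnected 1 controller(s)" lines seen above:
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done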
00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73442' 00:13:53.882 killing process with pid 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 73442 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:53.882 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:54.140 13:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.140 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.140 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:13:54.140 ************************************ 00:13:54.140 END TEST nvmf_connect_disconnect 00:13:54.140 ************************************ 00:13:54.140 00:13:54.140 real 0m14.469s 00:13:54.140 user 0m49.962s 00:13:54.140 sys 0m3.409s 00:13:54.140 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.140 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.399 ************************************ 00:13:54.399 START TEST nvmf_multitarget 00:13:54.399 ************************************ 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:54.399 * Looking for test storage... 
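The asterisk banners and the real/user/sys lines around END TEST are emitted by the run_test wrapper from autotest_common.sh (the "'[' 3 -le 1 ']'" check traced above is its argument-count guard). A simplified sketch of that wrapper; the real helper also saves and restores xtrace state, and the exact interleaving of the time output with the banners depends on stream buffering:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # e.g. .../multitarget.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }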
00:13:54.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.399 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.658 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.659 --rc genhtml_branch_coverage=1 00:13:54.659 --rc genhtml_function_coverage=1 00:13:54.659 --rc genhtml_legend=1 00:13:54.659 --rc geninfo_all_blocks=1 00:13:54.659 --rc geninfo_unexecuted_blocks=1 00:13:54.659 00:13:54.659 ' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.659 --rc genhtml_branch_coverage=1 00:13:54.659 --rc genhtml_function_coverage=1 00:13:54.659 --rc genhtml_legend=1 00:13:54.659 --rc geninfo_all_blocks=1 00:13:54.659 --rc geninfo_unexecuted_blocks=1 00:13:54.659 00:13:54.659 ' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.659 --rc genhtml_branch_coverage=1 00:13:54.659 --rc genhtml_function_coverage=1 00:13:54.659 --rc genhtml_legend=1 00:13:54.659 --rc geninfo_all_blocks=1 00:13:54.659 --rc geninfo_unexecuted_blocks=1 00:13:54.659 00:13:54.659 ' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.659 --rc genhtml_branch_coverage=1 00:13:54.659 --rc genhtml_function_coverage=1 00:13:54.659 --rc genhtml_legend=1 00:13:54.659 --rc geninfo_all_blocks=1 00:13:54.659 --rc geninfo_unexecuted_blocks=1 00:13:54.659 00:13:54.659 ' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.659 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
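[Annotation] Both suites log "[: : integer expression expected" from nvmf/common.sh line 33, where an unset flag reaches a numeric test as an empty string; the harness shrugs it off and continues. A hedged reproduction of the failure and one defensive spelling (flag name hypothetical):

flag=""                                              # unset/empty SPDK_TEST_*-style flag
[ "$flag" -eq 1 ] 2>/dev/null && echo unreachable    # errors out: '' is not an integer
[ "${flag:-0}" -eq 1 ] || echo "defaulting to 0 makes the test well-formed"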
target/multitarget.sh@15 -- # nvmftestinit 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:54.659 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:54.660 Cannot find device "nvmf_init_br" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:54.660 Cannot find device "nvmf_init_br2" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:54.660 Cannot find device "nvmf_tgt_br" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.660 Cannot find device "nvmf_tgt_br2" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:54.660 Cannot find device "nvmf_init_br" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:54.660 Cannot find device "nvmf_init_br2" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:54.660 Cannot find device "nvmf_tgt_br" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:54.660 Cannot find device "nvmf_tgt_br2" 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:13:54.660 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:54.918 Cannot find device "nvmf_br" 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:54.918 Cannot find device "nvmf_init_if" 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:54.918 Cannot find device "nvmf_init_if2" 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
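[Annotation] The "Cannot find device" lines above are just best-effort cleanup of a fixture that does not exist yet; the commands that follow rebuild it from scratch. A condensed, hedged sketch of the topology nvmf_veth_init assembles (device names and addresses taken from the trace; needs root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up
    ip link set "$br" master nvmf_br                 # bridge joins both sides
done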
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.918 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:55.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:55.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:13:55.177 00:13:55.177 --- 10.0.0.3 ping statistics --- 00:13:55.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.177 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:55.177 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:55.177 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:13:55.177 00:13:55.177 --- 10.0.0.4 ping statistics --- 00:13:55.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.177 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:55.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:13:55.177 00:13:55.177 --- 10.0.0.1 ping statistics --- 00:13:55.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.177 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:55.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:55.177 00:13:55.177 --- 10.0.0.2 ping statistics --- 00:13:55.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.177 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.177 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73904 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73904 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 73904 ']' 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:55.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:55.177 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.177 [2024-11-06 13:43:36.095933] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:13:55.177 [2024-11-06 13:43:36.096009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.436 [2024-11-06 13:43:36.251334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.436 [2024-11-06 13:43:36.330756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.436 [2024-11-06 13:43:36.330818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.436 [2024-11-06 13:43:36.330829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.436 [2024-11-06 13:43:36.330838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.436 [2024-11-06 13:43:36.330845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.436 [2024-11-06 13:43:36.332193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.436 [2024-11-06 13:43:36.332383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.436 [2024-11-06 13:43:36.332464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.436 [2024-11-06 13:43:36.332469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:56.371 "nvmf_tgt_1" 00:13:56.371 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:56.630 "nvmf_tgt_2" 00:13:56.630 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:56.630 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.888 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:56.888 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:56.888 true 00:13:56.888 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:56.888 true 00:13:56.888 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.888 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:57.212 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:57.212 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:57.212 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:57.212 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.212 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.212 rmmod nvme_tcp 00:13:57.212 rmmod nvme_fabrics 00:13:57.212 rmmod nvme_keyring 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73904 ']' 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73904 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 73904 ']' 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 73904 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:13:57.212 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73904 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:57.213 killing process with pid 73904 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 73904' 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 73904 00:13:57.213 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 73904 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:57.470 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.728 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.986 13:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:13:57.986 00:13:57.986 real 0m3.580s 00:13:57.986 user 0m9.438s 00:13:57.986 sys 0m1.165s 00:13:57.986 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.986 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:57.986 ************************************ 00:13:57.987 END TEST nvmf_multitarget 00:13:57.987 ************************************ 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.987 ************************************ 00:13:57.987 START TEST nvmf_rpc 00:13:57.987 ************************************ 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:57.987 * Looking for test storage... 00:13:57.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:57.987 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.246 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:58.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.246 --rc genhtml_branch_coverage=1 00:13:58.246 --rc genhtml_function_coverage=1 00:13:58.246 --rc genhtml_legend=1 00:13:58.246 --rc geninfo_all_blocks=1 00:13:58.246 --rc geninfo_unexecuted_blocks=1 00:13:58.246 00:13:58.246 ' 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:58.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.246 --rc genhtml_branch_coverage=1 00:13:58.246 --rc genhtml_function_coverage=1 00:13:58.246 --rc genhtml_legend=1 00:13:58.246 --rc geninfo_all_blocks=1 00:13:58.246 --rc geninfo_unexecuted_blocks=1 00:13:58.246 00:13:58.246 ' 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:58.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.246 --rc genhtml_branch_coverage=1 00:13:58.246 --rc genhtml_function_coverage=1 00:13:58.246 --rc genhtml_legend=1 00:13:58.246 --rc geninfo_all_blocks=1 00:13:58.246 --rc geninfo_unexecuted_blocks=1 00:13:58.246 00:13:58.246 ' 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:58.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.246 --rc genhtml_branch_coverage=1 00:13:58.246 --rc genhtml_function_coverage=1 00:13:58.246 --rc genhtml_legend=1 00:13:58.246 --rc geninfo_all_blocks=1 00:13:58.246 --rc geninfo_unexecuted_blocks=1 00:13:58.246 00:13:58.246 ' 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.246 13:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.246 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.247 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:58.247 Cannot find device "nvmf_init_br" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:13:58.247 13:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:58.247 Cannot find device "nvmf_init_br2" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:58.247 Cannot find device "nvmf_tgt_br" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.247 Cannot find device "nvmf_tgt_br2" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:58.247 Cannot find device "nvmf_init_br" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:58.247 Cannot find device "nvmf_init_br2" 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:13:58.247 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:58.506 Cannot find device "nvmf_tgt_br" 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:58.506 Cannot find device "nvmf_tgt_br2" 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:58.506 Cannot find device "nvmf_br" 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:58.506 Cannot find device "nvmf_init_if" 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:58.506 Cannot find device "nvmf_init_if2" 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:58.506 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
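[Annotation] The firewall holes that follow go in through a small wrapper so teardown can find them again. A hedged reconstruction of ipts from what the -x trace expands it to: every rule is tagged with an SPDK_NVMF comment, which is exactly what the later `iptables-save | grep -v SPDK_NVMF | iptables-restore` strips.

ipts() {
    # Apply an iptables rule, tagging it so cleanup can remove it wholesale.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT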
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:58.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:58.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.146 ms 00:13:58.765 00:13:58.765 --- 10.0.0.3 ping statistics --- 00:13:58.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.765 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:58.765 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:58.765 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:13:58.765 00:13:58.765 --- 10.0.0.4 ping statistics --- 00:13:58.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.765 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:58.765 00:13:58.765 --- 10.0.0.1 ping statistics --- 00:13:58.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.765 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:58.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:58.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:58.765 00:13:58.765 --- 10.0.0.2 ping statistics --- 00:13:58.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.765 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:58.765 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74196 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74196 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 74196 ']' 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.023 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.023 [2024-11-06 13:43:39.757798] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:13:59.023 [2024-11-06 13:43:39.757891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.023 [2024-11-06 13:43:39.913222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.280 [2024-11-06 13:43:39.990347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.280 [2024-11-06 13:43:39.990409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.280 [2024-11-06 13:43:39.990419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.280 [2024-11-06 13:43:39.990428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.280 [2024-11-06 13:43:39.990435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.280 [2024-11-06 13:43:39.991741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.280 [2024-11-06 13:43:39.991950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.280 [2024-11-06 13:43:39.992030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.280 [2024-11-06 13:43:39.992034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:59.847 "poll_groups": [ 00:13:59.847 { 00:13:59.847 "admin_qpairs": 0, 00:13:59.847 "completed_nvme_io": 0, 00:13:59.847 "current_admin_qpairs": 0, 00:13:59.847 "current_io_qpairs": 0, 00:13:59.847 "io_qpairs": 0, 00:13:59.847 "name": "nvmf_tgt_poll_group_000", 00:13:59.847 "pending_bdev_io": 0, 00:13:59.847 "transports": [] 00:13:59.847 }, 00:13:59.847 { 00:13:59.847 "admin_qpairs": 0, 00:13:59.847 "completed_nvme_io": 0, 00:13:59.847 "current_admin_qpairs": 0, 00:13:59.847 "current_io_qpairs": 0, 00:13:59.847 "io_qpairs": 0, 00:13:59.847 "name": "nvmf_tgt_poll_group_001", 00:13:59.847 "pending_bdev_io": 0, 00:13:59.847 "transports": [] 00:13:59.847 }, 00:13:59.847 { 00:13:59.847 "admin_qpairs": 0, 00:13:59.847 "completed_nvme_io": 0, 00:13:59.847 "current_admin_qpairs": 0, 00:13:59.847 "current_io_qpairs": 0, 
00:13:59.847 "io_qpairs": 0, 00:13:59.847 "name": "nvmf_tgt_poll_group_002", 00:13:59.847 "pending_bdev_io": 0, 00:13:59.847 "transports": [] 00:13:59.847 }, 00:13:59.847 { 00:13:59.847 "admin_qpairs": 0, 00:13:59.847 "completed_nvme_io": 0, 00:13:59.847 "current_admin_qpairs": 0, 00:13:59.847 "current_io_qpairs": 0, 00:13:59.847 "io_qpairs": 0, 00:13:59.847 "name": "nvmf_tgt_poll_group_003", 00:13:59.847 "pending_bdev_io": 0, 00:13:59.847 "transports": [] 00:13:59.847 } 00:13:59.847 ], 00:13:59.847 "tick_rate": 2490000000 00:13:59.847 }' 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:59.847 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.106 [2024-11-06 13:43:40.868169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:00.106 "poll_groups": [ 00:14:00.106 { 00:14:00.106 "admin_qpairs": 0, 00:14:00.106 "completed_nvme_io": 0, 00:14:00.106 "current_admin_qpairs": 0, 00:14:00.106 "current_io_qpairs": 0, 00:14:00.106 "io_qpairs": 0, 00:14:00.106 "name": "nvmf_tgt_poll_group_000", 00:14:00.106 "pending_bdev_io": 0, 00:14:00.106 "transports": [ 00:14:00.106 { 00:14:00.106 "trtype": "TCP" 00:14:00.106 } 00:14:00.106 ] 00:14:00.106 }, 00:14:00.106 { 00:14:00.106 "admin_qpairs": 0, 00:14:00.106 "completed_nvme_io": 0, 00:14:00.106 "current_admin_qpairs": 0, 00:14:00.106 "current_io_qpairs": 0, 00:14:00.106 "io_qpairs": 0, 00:14:00.106 "name": "nvmf_tgt_poll_group_001", 00:14:00.106 "pending_bdev_io": 0, 00:14:00.106 "transports": [ 00:14:00.106 { 00:14:00.106 "trtype": "TCP" 00:14:00.106 } 00:14:00.106 ] 00:14:00.106 }, 00:14:00.106 { 00:14:00.106 "admin_qpairs": 0, 00:14:00.106 "completed_nvme_io": 0, 00:14:00.106 "current_admin_qpairs": 0, 00:14:00.106 "current_io_qpairs": 0, 00:14:00.106 "io_qpairs": 0, 00:14:00.106 "name": "nvmf_tgt_poll_group_002", 00:14:00.106 "pending_bdev_io": 0, 00:14:00.106 "transports": [ 00:14:00.106 { 00:14:00.106 "trtype": "TCP" 00:14:00.106 } 
00:14:00.106 ] 00:14:00.106 }, 00:14:00.106 { 00:14:00.106 "admin_qpairs": 0, 00:14:00.106 "completed_nvme_io": 0, 00:14:00.106 "current_admin_qpairs": 0, 00:14:00.106 "current_io_qpairs": 0, 00:14:00.106 "io_qpairs": 0, 00:14:00.106 "name": "nvmf_tgt_poll_group_003", 00:14:00.106 "pending_bdev_io": 0, 00:14:00.106 "transports": [ 00:14:00.106 { 00:14:00.106 "trtype": "TCP" 00:14:00.106 } 00:14:00.106 ] 00:14:00.106 } 00:14:00.106 ], 00:14:00.106 "tick_rate": 2490000000 00:14:00.106 }' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:00.106 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.106 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 Malloc1 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:00.365 13:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 [2024-11-06 13:43:41.104422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.3 -s 4420 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.3 -s 4420 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.3 -s 4420 00:14:00.365 [2024-11-06 13:43:41.142914] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce' 00:14:00.365 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:00.365 could not add new controller: failed to write to nvme-fabrics device 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
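The exchange above is the access-control check in target/rpc.sh: the subsystem is created with -a (allow any host), allow-any-host is then switched off with nvmf_subsystem_allow_any_host -d, and a connect from a host NQN that is not on the allow list is expected to fail — the NOT wrapper treats the nonzero exit ("does not allow host", es=1) as the passing outcome. A minimal sketch of the same flow, assuming SPDK's scripts/rpc.py is on PATH and the target is already running with the TCP transport created:

    # Create the subsystem, then disable allow-any-host so the allow list is enforced.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Unlisted host NQN: the target logs "does not allow host" and connect exits nonzero.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce \
        || echo "rejected as expected"
    # Put the host on the allow list; the same connect now succeeds.
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce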
00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.365 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:00.624 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.624 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:00.624 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.624 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:00.624 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:02.525 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:02.784 [2024-11-06 13:43:43.520571] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce' 00:14:02.784 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:02.784 could not add new controller: failed to write to nvme-fabrics device 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.784 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:03.042 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:03.042 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:03.042 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.042 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:03.042 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:04.943 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 [2024-11-06 13:43:45.981910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.202 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.202 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:05.460 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.460 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:05.460 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.460 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:05.460 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:07.361 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 [2024-11-06 13:43:48.368306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 13:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:07.879 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.879 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:07.879 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.879 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:07.879 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.781 13:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.781 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.782 [2024-11-06 13:43:50.706587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.782 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:10.040 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # sleep 2 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:12.570 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.570 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:12.570 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.570 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:12.570 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.570 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:12.571 13:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 [2024-11-06 13:43:53.069348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:12.571 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:14.472 13:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.472 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.730 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.730 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.730 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.730 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.730 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.731 [2024-11-06 13:43:55.435683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:14.731 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 [2024-11-06 13:43:57.812313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 [2024-11-06 13:43:57.880272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.265 13:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 [2024-11-06 13:43:57.952238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.265 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 [2024-11-06 13:43:58.024219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 
13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 [2024-11-06 13:43:58.096212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:17.266 "poll_groups": [ 00:14:17.266 { 00:14:17.266 "admin_qpairs": 2, 00:14:17.266 "completed_nvme_io": 115, 00:14:17.266 "current_admin_qpairs": 0, 00:14:17.266 "current_io_qpairs": 0, 00:14:17.266 "io_qpairs": 16, 00:14:17.266 "name": "nvmf_tgt_poll_group_000", 00:14:17.266 "pending_bdev_io": 0, 00:14:17.266 "transports": [ 00:14:17.266 { 00:14:17.266 "trtype": "TCP" 00:14:17.266 } 00:14:17.266 ] 00:14:17.266 }, 00:14:17.266 { 00:14:17.266 "admin_qpairs": 3, 00:14:17.266 "completed_nvme_io": 166, 00:14:17.266 "current_admin_qpairs": 0, 00:14:17.266 "current_io_qpairs": 0, 00:14:17.266 "io_qpairs": 17, 00:14:17.266 "name": "nvmf_tgt_poll_group_001", 00:14:17.266 "pending_bdev_io": 0, 00:14:17.266 "transports": [ 00:14:17.266 { 00:14:17.266 "trtype": "TCP" 00:14:17.266 } 00:14:17.266 ] 00:14:17.266 }, 00:14:17.266 { 00:14:17.266 "admin_qpairs": 1, 00:14:17.266 "completed_nvme_io": 70, 00:14:17.266 "current_admin_qpairs": 0, 00:14:17.266 "current_io_qpairs": 0, 00:14:17.266 "io_qpairs": 19, 00:14:17.266 "name": "nvmf_tgt_poll_group_002", 00:14:17.266 "pending_bdev_io": 0, 00:14:17.266 "transports": [ 00:14:17.266 { 00:14:17.266 "trtype": "TCP" 00:14:17.266 } 00:14:17.266 ] 00:14:17.266 }, 00:14:17.266 { 00:14:17.266 "admin_qpairs": 1, 00:14:17.266 "completed_nvme_io": 69, 00:14:17.266 "current_admin_qpairs": 0, 00:14:17.266 "current_io_qpairs": 0, 00:14:17.266 "io_qpairs": 18, 00:14:17.266 "name": "nvmf_tgt_poll_group_003", 00:14:17.266 "pending_bdev_io": 0, 00:14:17.266 "transports": [ 00:14:17.266 { 00:14:17.266 "trtype": "TCP" 00:14:17.266 } 00:14:17.266 ] 00:14:17.266 } 00:14:17.266 ], 
00:14:17.266 "tick_rate": 2490000000 00:14:17.266 }' 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:17.266 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.525 rmmod nvme_tcp 00:14:17.525 rmmod nvme_fabrics 00:14:17.525 rmmod nvme_keyring 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74196 ']' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74196 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 74196 ']' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 74196 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74196 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:17.525 killing process with pid 74196 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:17.525 13:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74196' 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 74196 00:14:17.525 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 74196 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:17.783 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.041 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:14:18.300 00:14:18.300 real 0m20.233s 00:14:18.300 user 1m13.629s 00:14:18.300 sys 0m3.472s 00:14:18.300 ************************************ 
00:14:18.300 END TEST nvmf_rpc 00:14:18.300 ************************************ 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.300 ************************************ 00:14:18.300 START TEST nvmf_invalid 00:14:18.300 ************************************ 00:14:18.300 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:18.301 * Looking for test storage... 00:14:18.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:18.301 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:18.301 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:14:18.301 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.560 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:18.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.561 --rc genhtml_branch_coverage=1 00:14:18.561 --rc genhtml_function_coverage=1 00:14:18.561 --rc genhtml_legend=1 00:14:18.561 --rc geninfo_all_blocks=1 00:14:18.561 --rc geninfo_unexecuted_blocks=1 00:14:18.561 00:14:18.561 ' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:18.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.561 --rc genhtml_branch_coverage=1 00:14:18.561 --rc genhtml_function_coverage=1 00:14:18.561 --rc genhtml_legend=1 00:14:18.561 --rc geninfo_all_blocks=1 00:14:18.561 --rc geninfo_unexecuted_blocks=1 00:14:18.561 00:14:18.561 ' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:18.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.561 --rc genhtml_branch_coverage=1 00:14:18.561 --rc genhtml_function_coverage=1 00:14:18.561 --rc genhtml_legend=1 00:14:18.561 --rc geninfo_all_blocks=1 00:14:18.561 --rc geninfo_unexecuted_blocks=1 00:14:18.561 00:14:18.561 ' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:18.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.561 --rc genhtml_branch_coverage=1 00:14:18.561 --rc genhtml_function_coverage=1 00:14:18.561 --rc genhtml_legend=1 00:14:18.561 --rc geninfo_all_blocks=1 00:14:18.561 --rc geninfo_unexecuted_blocks=1 00:14:18.561 00:14:18.561 ' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:18.561 13:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.561 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:18.561 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
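The nvmftestinit variables above name every piece of the virtual topology that nvmf_veth_init is about to probe and build: veth pairs for the initiator and target interfaces, a bridge nvmf_br joining them, and a network namespace nvmf_tgt_ns_spdk holding the target ends. A minimal standalone sketch of that layout, using the interface names and 10.0.0.x/24 addresses from the log (this reduced single-pair variant and the exact up/enslave ordering are simplifications of the real helper in nvmf/common.sh):

  # Sketch: one initiator/target veth pair plus bridge, as nvmf_veth_init builds below.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up   # host-side ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  ping -c 1 10.0.0.3                                           # host reaches the namespaced target

Once the bridge is up, iptables ACCEPT rules for port 4420 (tagged SPDK_NVMF so iptr can strip them again at cleanup) let the NVMe/TCP listener accept connections across it.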
00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.562 Cannot find device "nvmf_init_br" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.562 Cannot find device "nvmf_init_br2" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.562 Cannot find device "nvmf_tgt_br" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.562 Cannot find device "nvmf_tgt_br2" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.562 Cannot find device "nvmf_init_br" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.562 Cannot find device "nvmf_init_br2" 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:14:18.562 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.821 Cannot find device "nvmf_tgt_br" 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.821 Cannot find device "nvmf_tgt_br2" 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.821 Cannot find device "nvmf_br" 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.821 Cannot find device "nvmf_init_if" 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.821 Cannot find device "nvmf_init_if2" 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.821 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.821 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.080 13:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:19.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:14:19.080 00:14:19.080 --- 10.0.0.3 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:19.080 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:19.080 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:14:19.080 00:14:19.080 --- 10.0.0.4 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:19.080 00:14:19.080 --- 10.0.0.1 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:19.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:19.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:14:19.080 00:14:19.080 --- 10.0.0.2 ping statistics --- 00:14:19.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.080 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74769 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74769 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 74769 ']' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.080 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:19.339 [2024-11-06 13:44:00.032509] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
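nvmfappstart then launches the target inside the namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD built above) and blocks in waitforlisten until pid 74769 answers on /var/tmp/spdk.sock. A rough sketch of that start-and-poll pattern (the rpc_get_methods probe is an assumed stand-in for whatever readiness check the real helper in common/autotest_common.sh performs):

  # Sketch: start nvmf_tgt in the namespace, then poll its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # An RPC succeeds only once the app has created and is listening on the socket.
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done

The -m 0xF core mask pins four reactors (cores 0-3), which matches the four "Reactor started on core N" notices that follow.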
00:14:19.339 [2024-11-06 13:44:00.033142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.339 [2024-11-06 13:44:00.186403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.339 [2024-11-06 13:44:00.262040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.339 [2024-11-06 13:44:00.262297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.339 [2024-11-06 13:44:00.262492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.339 [2024-11-06 13:44:00.262542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.339 [2024-11-06 13:44:00.262600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.339 [2024-11-06 13:44:00.263997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.339 [2024-11-06 13:44:00.264187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.339 [2024-11-06 13:44:00.264192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.339 [2024-11-06 13:44:00.264090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:20.323 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29567 00:14:20.323 [2024-11-06 13:44:01.201756] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:20.323 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:20.323 request: 00:14:20.323 { 00:14:20.323 "method": "nvmf_create_subsystem", 00:14:20.323 "params": { 00:14:20.323 "nqn": "nqn.2016-06.io.spdk:cnode29567", 00:14:20.323 "tgt_name": "foobar" 00:14:20.323 } 00:14:20.323 } 00:14:20.323 Got JSON-RPC error response 00:14:20.323 GoRPCClient: error on JSON-RPC call' 00:14:20.323 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode29567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:20.323 request: 00:14:20.323 { 00:14:20.323 "method": "nvmf_create_subsystem", 00:14:20.323 "params": { 00:14:20.323 "nqn": "nqn.2016-06.io.spdk:cnode29567", 00:14:20.323 "tgt_name": "foobar" 00:14:20.323 } 00:14:20.323 } 00:14:20.323 Got JSON-RPC error response 00:14:20.323 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:20.323 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:20.323 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19359 00:14:20.583 [2024-11-06 13:44:01.429636] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19359: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:20.583 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19359 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:20.583 request: 00:14:20.583 { 00:14:20.583 "method": "nvmf_create_subsystem", 00:14:20.583 "params": { 00:14:20.583 "nqn": "nqn.2016-06.io.spdk:cnode19359", 00:14:20.583 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:20.583 } 00:14:20.583 } 00:14:20.583 Got JSON-RPC error response 00:14:20.583 GoRPCClient: error on JSON-RPC call' 00:14:20.583 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19359 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:20.583 request: 00:14:20.583 { 00:14:20.583 "method": "nvmf_create_subsystem", 00:14:20.583 "params": { 00:14:20.583 "nqn": "nqn.2016-06.io.spdk:cnode19359", 00:14:20.583 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:20.583 } 00:14:20.583 } 00:14:20.583 Got JSON-RPC error response 00:14:20.583 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:20.583 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:20.583 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29269 00:14:20.843 [2024-11-06 13:44:01.661447] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29269: invalid model number 'SPDK_Controller' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29269], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:20.843 request: 00:14:20.843 { 00:14:20.843 "method": "nvmf_create_subsystem", 00:14:20.843 "params": { 00:14:20.843 "nqn": "nqn.2016-06.io.spdk:cnode29269", 00:14:20.843 "model_number": "SPDK_Controller\u001f" 
00:14:20.843 } 00:14:20.843 } 00:14:20.843 Got JSON-RPC error response 00:14:20.843 GoRPCClient: error on JSON-RPC call' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/06 13:44:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode29269], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:20.843 request: 00:14:20.843 { 00:14:20.843 "method": "nvmf_create_subsystem", 00:14:20.843 "params": { 00:14:20.843 "nqn": "nqn.2016-06.io.spdk:cnode29269", 00:14:20.843 "model_number": "SPDK_Controller\u001f" 00:14:20.843 } 00:14:20.843 } 00:14:20.843 Got JSON-RPC error response 00:14:20.843 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 
13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:20.843 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:21.103 
13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:21.103 
13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'iEA=-RU\q+L7E/&P?za%9' 00:14:21.103 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'iEA=-RU\q+L7E/&P?za%9' nqn.2016-06.io.spdk:cnode23187 00:14:21.363 [2024-11-06 13:44:02.089140] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23187: invalid serial number 'iEA=-RU\q+L7E/&P?za%9' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/06 13:44:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23187 serial_number:iEA=-RU\q+L7E/&P?za%9], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN iEA=-RU\q+L7E/&P?za%9 00:14:21.363 request: 00:14:21.363 { 00:14:21.363 "method": "nvmf_create_subsystem", 00:14:21.363 "params": { 00:14:21.363 "nqn": "nqn.2016-06.io.spdk:cnode23187", 00:14:21.363 
"serial_number": "iEA=-RU\\q+L7E/&P?za%9" 00:14:21.363 } 00:14:21.363 } 00:14:21.363 Got JSON-RPC error response 00:14:21.363 GoRPCClient: error on JSON-RPC call' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/06 13:44:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23187 serial_number:iEA=-RU\q+L7E/&P?za%9], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN iEA=-RU\q+L7E/&P?za%9 00:14:21.363 request: 00:14:21.363 { 00:14:21.363 "method": "nvmf_create_subsystem", 00:14:21.363 "params": { 00:14:21.363 "nqn": "nqn.2016-06.io.spdk:cnode23187", 00:14:21.363 "serial_number": "iEA=-RU\\q+L7E/&P?za%9" 00:14:21.363 } 00:14:21.363 } 00:14:21.363 Got JSON-RPC error response 00:14:21.363 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:21.363 13:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:21.363 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:21.364 13:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:21.364 
13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.364 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 
13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 
13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.625 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 
00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj' 00:14:21.626 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj' nqn.2016-06.io.spdk:cnode21646 00:14:21.885 [2024-11-06 13:44:02.692769] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21646: invalid model number '&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj' 00:14:21.885 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/06 13:44:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj nqn:nqn.2016-06.io.spdk:cnode21646], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN &@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj 00:14:21.885 request: 00:14:21.885 { 00:14:21.885 "method": "nvmf_create_subsystem", 00:14:21.885 "params": { 00:14:21.885 "nqn": "nqn.2016-06.io.spdk:cnode21646", 
00:14:21.885 "model_number": "&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj" 00:14:21.885 } 00:14:21.885 } 00:14:21.885 Got JSON-RPC error response 00:14:21.885 GoRPCClient: error on JSON-RPC call' 00:14:21.885 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/06 13:44:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj nqn:nqn.2016-06.io.spdk:cnode21646], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN &@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj 00:14:21.885 request: 00:14:21.885 { 00:14:21.885 "method": "nvmf_create_subsystem", 00:14:21.885 "params": { 00:14:21.885 "nqn": "nqn.2016-06.io.spdk:cnode21646", 00:14:21.885 "model_number": "&@tbBby@0#`5_{%t,-[^yAF.AbQ&xM >fHMlb ybj" 00:14:21.885 } 00:14:21.885 } 00:14:21.885 Got JSON-RPC error response 00:14:21.885 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:21.885 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:22.144 [2024-11-06 13:44:02.912644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.144 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:22.402 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:22.402 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:22.402 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:22.402 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:22.402 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:22.660 [2024-11-06 13:44:03.408305] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:22.660 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:22.660 request: 00:14:22.660 { 00:14:22.660 "method": "nvmf_subsystem_remove_listener", 00:14:22.660 "params": { 00:14:22.660 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:22.660 "listen_address": { 00:14:22.660 "trtype": "tcp", 00:14:22.660 "traddr": "", 00:14:22.660 "trsvcid": "4421" 00:14:22.660 } 00:14:22.660 } 00:14:22.660 } 00:14:22.660 Got JSON-RPC error response 00:14:22.660 GoRPCClient: error on JSON-RPC call' 00:14:22.660 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:22.660 request: 00:14:22.660 { 00:14:22.660 "method": "nvmf_subsystem_remove_listener", 00:14:22.660 "params": { 00:14:22.660 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:14:22.660 "listen_address": { 00:14:22.660 "trtype": "tcp", 00:14:22.660 "traddr": "", 00:14:22.660 "trsvcid": "4421" 00:14:22.660 } 00:14:22.660 } 00:14:22.660 } 00:14:22.660 Got JSON-RPC error response 00:14:22.660 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:22.660 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20577 -i 0 00:14:22.918 [2024-11-06 13:44:03.668126] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20577: invalid cntlid range [0-65519] 00:14:22.919 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20577], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:22.919 request: 00:14:22.919 { 00:14:22.919 "method": "nvmf_create_subsystem", 00:14:22.919 "params": { 00:14:22.919 "nqn": "nqn.2016-06.io.spdk:cnode20577", 00:14:22.919 "min_cntlid": 0 00:14:22.919 } 00:14:22.919 } 00:14:22.919 Got JSON-RPC error response 00:14:22.919 GoRPCClient: error on JSON-RPC call' 00:14:22.919 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20577], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:22.919 request: 00:14:22.919 { 00:14:22.919 "method": "nvmf_create_subsystem", 00:14:22.919 "params": { 00:14:22.919 "nqn": "nqn.2016-06.io.spdk:cnode20577", 00:14:22.919 "min_cntlid": 0 00:14:22.919 } 00:14:22.919 } 00:14:22.919 Got JSON-RPC error response 00:14:22.919 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:22.919 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24642 -i 65520 00:14:23.177 [2024-11-06 13:44:03.943944] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24642: invalid cntlid range [65520-65519] 00:14:23.177 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24642], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:23.177 request: 00:14:23.177 { 00:14:23.177 "method": "nvmf_create_subsystem", 00:14:23.177 "params": { 00:14:23.177 "nqn": "nqn.2016-06.io.spdk:cnode24642", 00:14:23.177 "min_cntlid": 65520 00:14:23.177 } 00:14:23.177 } 00:14:23.177 Got JSON-RPC error response 00:14:23.177 GoRPCClient: error on JSON-RPC call' 00:14:23.177 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/06 13:44:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24642], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:23.177 request: 00:14:23.177 { 00:14:23.177 "method": "nvmf_create_subsystem", 00:14:23.177 "params": { 00:14:23.177 "nqn": 
"nqn.2016-06.io.spdk:cnode24642", 00:14:23.177 "min_cntlid": 65520 00:14:23.177 } 00:14:23.177 } 00:14:23.177 Got JSON-RPC error response 00:14:23.177 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:23.177 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2499 -I 0 00:14:23.436 [2024-11-06 13:44:04.183778] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2499: invalid cntlid range [1-0] 00:14:23.436 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2499], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:23.436 request: 00:14:23.436 { 00:14:23.436 "method": "nvmf_create_subsystem", 00:14:23.436 "params": { 00:14:23.436 "nqn": "nqn.2016-06.io.spdk:cnode2499", 00:14:23.436 "max_cntlid": 0 00:14:23.436 } 00:14:23.436 } 00:14:23.436 Got JSON-RPC error response 00:14:23.436 GoRPCClient: error on JSON-RPC call' 00:14:23.436 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2499], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:23.436 request: 00:14:23.436 { 00:14:23.436 "method": "nvmf_create_subsystem", 00:14:23.436 "params": { 00:14:23.436 "nqn": "nqn.2016-06.io.spdk:cnode2499", 00:14:23.436 "max_cntlid": 0 00:14:23.436 } 00:14:23.436 } 00:14:23.436 Got JSON-RPC error response 00:14:23.436 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:23.436 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12357 -I 65520 00:14:23.732 [2024-11-06 13:44:04.419631] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12357: invalid cntlid range [1-65520] 00:14:23.732 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12357], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:23.732 request: 00:14:23.732 { 00:14:23.732 "method": "nvmf_create_subsystem", 00:14:23.732 "params": { 00:14:23.732 "nqn": "nqn.2016-06.io.spdk:cnode12357", 00:14:23.732 "max_cntlid": 65520 00:14:23.732 } 00:14:23.732 } 00:14:23.732 Got JSON-RPC error response 00:14:23.732 GoRPCClient: error on JSON-RPC call' 00:14:23.732 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12357], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:23.732 request: 00:14:23.732 { 00:14:23.732 "method": "nvmf_create_subsystem", 00:14:23.732 "params": { 00:14:23.732 "nqn": "nqn.2016-06.io.spdk:cnode12357", 00:14:23.732 "max_cntlid": 65520 00:14:23.732 } 00:14:23.732 } 00:14:23.732 Got JSON-RPC error 
response 00:14:23.732 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:23.732 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19874 -i 6 -I 5 00:14:23.732 [2024-11-06 13:44:04.647523] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19874: invalid cntlid range [6-5] 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:23.991 request: 00:14:23.991 { 00:14:23.991 "method": "nvmf_create_subsystem", 00:14:23.991 "params": { 00:14:23.991 "nqn": "nqn.2016-06.io.spdk:cnode19874", 00:14:23.991 "min_cntlid": 6, 00:14:23.991 "max_cntlid": 5 00:14:23.991 } 00:14:23.991 } 00:14:23.991 Got JSON-RPC error response 00:14:23.991 GoRPCClient: error on JSON-RPC call' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/06 13:44:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:23.991 request: 00:14:23.991 { 00:14:23.991 "method": "nvmf_create_subsystem", 00:14:23.991 "params": { 00:14:23.991 "nqn": "nqn.2016-06.io.spdk:cnode19874", 00:14:23.991 "min_cntlid": 6, 00:14:23.991 "max_cntlid": 5 00:14:23.991 } 00:14:23.991 } 00:14:23.991 Got JSON-RPC error response 00:14:23.991 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:23.991 { 00:14:23.991 "name": "foobar", 00:14:23.991 "method": "nvmf_delete_target", 00:14:23.991 "req_id": 1 00:14:23.991 } 00:14:23.991 Got JSON-RPC error response 00:14:23.991 response: 00:14:23.991 { 00:14:23.991 "code": -32602, 00:14:23.991 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:23.991 }' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:23.991 { 00:14:23.991 "name": "foobar", 00:14:23.991 "method": "nvmf_delete_target", 00:14:23.991 "req_id": 1 00:14:23.991 } 00:14:23.991 Got JSON-RPC error response 00:14:23.991 response: 00:14:23.991 { 00:14:23.991 "code": -32602, 00:14:23.991 "message": "The specified target doesn't exist, cannot delete it." 
00:14:23.991 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.991 rmmod nvme_tcp 00:14:23.991 rmmod nvme_fabrics 00:14:23.991 rmmod nvme_keyring 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74769 ']' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74769 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 74769 ']' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 74769 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.991 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74769 00:14:24.250 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:24.250 killing process with pid 74769 00:14:24.250 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:24.250 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74769' 00:14:24.251 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 74769 00:14:24.251 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 74769 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.510 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:14:24.769 00:14:24.769 real 0m6.416s 00:14:24.769 user 0m22.690s 00:14:24.769 sys 0m1.979s 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:24.769 ************************************ 00:14:24.769 END TEST nvmf_invalid 00:14:24.769 ************************************ 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.769 ************************************ 00:14:24.769 START TEST nvmf_connect_stress 00:14:24.769 
************************************ 00:14:24.769 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:24.769 * Looking for test storage... 00:14:25.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.030 --rc genhtml_branch_coverage=1 00:14:25.030 --rc genhtml_function_coverage=1 00:14:25.030 --rc genhtml_legend=1 00:14:25.030 --rc geninfo_all_blocks=1 00:14:25.030 --rc geninfo_unexecuted_blocks=1 00:14:25.030 00:14:25.030 ' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
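The lt/cmp_versions trace above is the harness probing the installed lcov: it splits each version string on dots and dashes, compares component by component, and returns success when the first is older, which then selects the pre-2.0 spelling of the coverage flags. A minimal standalone sketch of that dotted-version comparison, assuming plain decimal components (a simplified re-implementation for illustration, not the scripts/common.sh helper itself):

#!/usr/bin/env bash
# Simplified sketch of the cmp_versions idiom traced above: split on '.'
# and '-', treat missing components as 0, compare left to right.
version_lt() {
    local -a v1 v2
    local i a b n
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0}              # missing components count as 0
        b=${v2[i]:-0}
        (( a < b )) && return 0    # first differing component decides
        (( a > b )) && return 1
    done
    return 1                       # equal is not "less than"
}

# Mirroring the trace: is the installed lcov older than major version 2?
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo 'lcov < 2: export the --rc lcov_*_coverage=1 option spelling'
fi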
00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.030 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:25.031 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:25.031 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:25.031 Cannot find device "nvmf_init_br" 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:25.031 Cannot find device "nvmf_init_br2" 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:25.031 Cannot find device "nvmf_tgt_br" 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.031 Cannot find device "nvmf_tgt_br2" 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:14:25.031 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:25.290 Cannot find device "nvmf_init_br" 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:25.290 Cannot find device "nvmf_init_br2" 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:25.290 Cannot find device "nvmf_tgt_br" 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:14:25.290 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:25.290 Cannot find device "nvmf_tgt_br2" 00:14:25.290 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:14:25.290 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:25.291 Cannot find device "nvmf_br" 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:25.291 Cannot find device "nvmf_init_if" 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:25.291 Cannot find device "nvmf_init_if2" 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.291 13:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:25.291 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:25.550 13:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:25.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:25.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:14:25.550 00:14:25.550 --- 10.0.0.3 ping statistics --- 00:14:25.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.550 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:25.550 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:25.550 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:14:25.550 00:14:25.550 --- 10.0.0.4 ping statistics --- 00:14:25.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.550 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:25.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:25.550 00:14:25.550 --- 10.0.0.1 ping statistics --- 00:14:25.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.550 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:25.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:14:25.550 00:14:25.550 --- 10.0.0.2 ping statistics --- 00:14:25.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.550 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:25.550 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75339 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75339 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 75339 ']' 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:25.551 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.551 [2024-11-06 13:44:06.474385] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
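The pings that just succeeded close out nvmf_veth_init: two initiator-side addresses on the host (10.0.0.1, 10.0.0.2), two target-side addresses inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), four veth pairs, and one bridge joining the host-side peers so all four addresses share an L2 segment. A condensed sketch of that topology, using the names and addresses from the trace (run as root; the real helper also clears leftovers from earlier runs and installs the SPDK_NVMF-tagged iptables ACCEPT rules shown above):

#!/usr/bin/env bash
# Condensed sketch of the veth/netns topology nvmf_veth_init builds above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Four veth pairs: the *_if end is used directly, the *_br end is bridged.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator addresses on the host, target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge stitches the four host-side peers into a single segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Same sanity check as the trace: the host reaches the namespaced target.
ping -c 1 10.0.0.3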
00:14:25.551 [2024-11-06 13:44:06.474464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.810 [2024-11-06 13:44:06.629566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.810 [2024-11-06 13:44:06.690455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.810 [2024-11-06 13:44:06.690708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.810 [2024-11-06 13:44:06.690726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.810 [2024-11-06 13:44:06.690735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.810 [2024-11-06 13:44:06.690742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.810 [2024-11-06 13:44:06.692019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.810 [2024-11-06 13:44:06.692346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.810 [2024-11-06 13:44:06.692347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.748 [2024-11-06 13:44:07.439920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:26.748 13:44:07 
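With the reactors up and waitforlisten satisfied, connect_stress.sh configures the target over JSON-RPC. The same bring-up written as direct scripts/rpc.py calls is sketched below: rpc_cmd in the trace is a wrapper that forwards these exact arguments to rpc.py over the default /var/tmp/spdk.sock, which the namespaced target still shares with the host.

#!/usr/bin/env bash
# Sketch of the target bring-up performed via rpc_cmd in the trace,
# with arguments copied from the log (repo path is the one from this run).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Enable the TCP transport with an 8 KiB I/O unit size.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 2. Subsystem open to any host (-a), fixed serial, up to 10 namespaces.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10

# 3. Listen on the namespaced target address set up earlier.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420

# 4. Create the 1000 MiB, 512-byte-block null bdev the test exposes.
$RPC bdev_null_create NULL1 1000 512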
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.748 [2024-11-06 13:44:07.468095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.748 NULL1 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75391 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.748 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.749 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.318 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:27.318 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:27.318 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.318 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.318 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.577 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.577 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:27.577 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.577 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.577 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.836 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.836 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:27.836 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.836 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.836 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.095 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.095 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:28.095 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.095 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.095 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.354 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.354 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:28.354 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.354 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.354 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.919 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.919 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:28.919 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.919 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.919 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.177 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.177 
13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:29.177 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.177 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.177 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.436 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.436 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:29.436 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.436 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.436 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.695 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.695 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:29.695 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.695 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.695 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.953 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.953 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:29.953 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.953 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.953 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.520 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.520 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:30.520 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.520 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.520 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.779 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.779 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:30.779 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.779 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.780 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.068 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.068 13:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:31.068 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.068 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.068 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.355 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.355 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:31.355 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.355 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.355 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.621 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.621 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:31.621 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.621 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.621 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.189 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.189 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:32.189 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.189 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.189 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.448 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.448 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:32.448 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.448 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.448 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.706 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.706 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:32.706 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.706 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.706 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.965 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.965 13:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:32.965 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.965 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.965 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.224 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.224 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:33.224 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.224 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.224 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.792 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.792 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:33.792 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.792 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.792 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.051 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.051 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:34.051 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.051 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.051 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.310 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.310 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:34.310 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.310 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.310 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.570 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.570 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:34.570 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.570 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.570 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.831 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.831 13:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:34.831 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.831 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.831 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.398 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.398 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:35.398 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.398 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.398 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.657 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.657 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:35.657 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.657 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.657 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.916 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.916 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:35.916 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.916 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.916 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.175 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.175 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:36.175 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.175 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.175 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.434 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.434 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:36.434 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.434 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.434 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.000 13:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:37.000 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.000 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.000 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75391 00:14:37.259 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75391) - No such process 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75391 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.259 rmmod nvme_tcp 00:14:37.259 rmmod nvme_fabrics 00:14:37.259 rmmod nvme_keyring 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75339 ']' 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75339 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 75339 ']' 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 75339 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:37.259 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75339 00:14:37.518 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:37.518 13:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:37.518 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75339' 00:14:37.518 killing process with pid 75339 00:14:37.518 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 75339 00:14:37.518 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 75339 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.777 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.778 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.778 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.778 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.778 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.038 
13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:14:38.038 00:14:38.038 real 0m13.214s 00:14:38.038 user 0m41.003s 00:14:38.038 sys 0m5.020s 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.038 ************************************ 00:14:38.038 END TEST nvmf_connect_stress 00:14:38.038 ************************************ 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.038 13:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:38.039 13:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:38.039 13:44:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.039 ************************************ 00:14:38.039 START TEST nvmf_fused_ordering 00:14:38.039 ************************************ 00:14:38.039 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.299 * Looking for test storage... 00:14:38.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.299 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.299 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.299 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.299 13:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.299 --rc genhtml_branch_coverage=1 00:14:38.299 --rc genhtml_function_coverage=1 00:14:38.299 --rc genhtml_legend=1 00:14:38.299 --rc geninfo_all_blocks=1 00:14:38.299 --rc geninfo_unexecuted_blocks=1 00:14:38.299 00:14:38.299 ' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.299 --rc genhtml_branch_coverage=1 00:14:38.299 --rc genhtml_function_coverage=1 00:14:38.299 --rc genhtml_legend=1 00:14:38.299 --rc geninfo_all_blocks=1 00:14:38.299 --rc geninfo_unexecuted_blocks=1 00:14:38.299 00:14:38.299 ' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.299 --rc genhtml_branch_coverage=1 00:14:38.299 --rc genhtml_function_coverage=1 00:14:38.299 --rc genhtml_legend=1 00:14:38.299 --rc geninfo_all_blocks=1 00:14:38.299 --rc geninfo_unexecuted_blocks=1 00:14:38.299 00:14:38.299 ' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.299 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:38.299 --rc genhtml_branch_coverage=1 00:14:38.299 --rc genhtml_function_coverage=1 00:14:38.299 --rc genhtml_legend=1 00:14:38.299 --rc geninfo_all_blocks=1 00:14:38.299 --rc geninfo_unexecuted_blocks=1 00:14:38.299 00:14:38.299 ' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:38.299 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:38.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:38.300 13:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:38.300 Cannot find device "nvmf_init_br" 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:38.300 Cannot find device "nvmf_init_br2" 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:38.300 Cannot find device "nvmf_tgt_br" 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.300 Cannot find device "nvmf_tgt_br2" 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:14:38.300 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:38.300 Cannot find device "nvmf_init_br" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:38.559 Cannot find device "nvmf_init_br2" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:38.559 Cannot find device "nvmf_tgt_br" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:38.559 Cannot find device "nvmf_tgt_br2" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:38.559 Cannot find device "nvmf_br" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:38.559 Cannot find device "nvmf_init_if" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:14:38.559 
13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:38.559 Cannot find device "nvmf_init_if2" 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:38.559 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.829 13:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:14:38.829 00:14:38.829 --- 10.0.0.3 ping statistics --- 00:14:38.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.829 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:38.829 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.829 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.829 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:14:38.829 00:14:38.829 --- 10.0.0.4 ping statistics --- 00:14:38.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.829 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:38.830 00:14:38.830 --- 10.0.0.1 ping statistics --- 00:14:38.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.830 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:38.830 00:14:38.830 --- 10.0.0.2 ping statistics --- 00:14:38.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.830 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75769 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75769 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 75769 ']' 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.830 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:39.118 [2024-11-06 13:44:19.780774] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:14:39.118 [2024-11-06 13:44:19.780875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.118 [2024-11-06 13:44:19.933949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.118 [2024-11-06 13:44:20.010872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.118 [2024-11-06 13:44:20.010969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.118 [2024-11-06 13:44:20.010990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.118 [2024-11-06 13:44:20.011000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.118 [2024-11-06 13:44:20.011008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.118 [2024-11-06 13:44:20.011465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.054 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:40.054 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:40.054 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 [2024-11-06 13:44:20.729544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 [2024-11-06 13:44:20.753671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 NULL1 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.055 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:40.055 [2024-11-06 13:44:20.813885] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:14:40.055 [2024-11-06 13:44:20.813932] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75825 ] 00:14:40.313 Attached to nqn.2016-06.io.spdk:cnode1 00:14:40.313 Namespace ID: 1 size: 1GB 00:14:40.313 fused_ordering(0) 00:14:40.313 fused_ordering(1) 00:14:40.313 fused_ordering(2) 00:14:40.313 fused_ordering(3) 00:14:40.313 fused_ordering(4) 00:14:40.313 fused_ordering(5) 00:14:40.313 fused_ordering(6) 00:14:40.313 fused_ordering(7) 00:14:40.313 fused_ordering(8) 00:14:40.313 fused_ordering(9) 00:14:40.313 fused_ordering(10) 00:14:40.313 fused_ordering(11) 00:14:40.313 fused_ordering(12) 00:14:40.313 fused_ordering(13) 00:14:40.313 fused_ordering(14) 00:14:40.313 fused_ordering(15) 00:14:40.313 fused_ordering(16) 00:14:40.313 fused_ordering(17) 00:14:40.313 fused_ordering(18) 00:14:40.313 fused_ordering(19) 00:14:40.313 fused_ordering(20) 00:14:40.313 fused_ordering(21) 00:14:40.313 fused_ordering(22) 00:14:40.313 fused_ordering(23) 00:14:40.313 fused_ordering(24) 00:14:40.313 fused_ordering(25) 00:14:40.313 fused_ordering(26) 00:14:40.313 fused_ordering(27) 00:14:40.313 fused_ordering(28) 00:14:40.313 fused_ordering(29) 00:14:40.313 fused_ordering(30) 00:14:40.313 fused_ordering(31) 00:14:40.313 fused_ordering(32) 00:14:40.313 fused_ordering(33) 00:14:40.313 fused_ordering(34) 00:14:40.313 fused_ordering(35) 00:14:40.313 fused_ordering(36) 00:14:40.313 fused_ordering(37) 00:14:40.313 fused_ordering(38) 00:14:40.313 fused_ordering(39) 00:14:40.313 fused_ordering(40) 00:14:40.313 fused_ordering(41) 00:14:40.313 fused_ordering(42) 00:14:40.313 fused_ordering(43) 00:14:40.313 fused_ordering(44) 00:14:40.313 fused_ordering(45) 00:14:40.313 fused_ordering(46) 00:14:40.313 fused_ordering(47) 00:14:40.313 fused_ordering(48) 00:14:40.313 fused_ordering(49) 00:14:40.313 fused_ordering(50) 00:14:40.313 fused_ordering(51) 00:14:40.313 fused_ordering(52) 00:14:40.313 fused_ordering(53) 00:14:40.313 fused_ordering(54) 00:14:40.313 fused_ordering(55) 00:14:40.313 fused_ordering(56) 00:14:40.313 fused_ordering(57) 00:14:40.313 fused_ordering(58) 00:14:40.313 fused_ordering(59) 00:14:40.313 fused_ordering(60) 00:14:40.313 fused_ordering(61) 00:14:40.313 fused_ordering(62) 00:14:40.313 fused_ordering(63) 00:14:40.313 fused_ordering(64) 00:14:40.313 fused_ordering(65) 00:14:40.313 fused_ordering(66) 00:14:40.313 fused_ordering(67) 00:14:40.313 fused_ordering(68) 00:14:40.313 fused_ordering(69) 00:14:40.313 fused_ordering(70) 00:14:40.313 fused_ordering(71) 00:14:40.313 fused_ordering(72) 00:14:40.313 fused_ordering(73) 00:14:40.313 fused_ordering(74) 00:14:40.313 fused_ordering(75) 00:14:40.313 fused_ordering(76) 00:14:40.313 fused_ordering(77) 00:14:40.313 fused_ordering(78) 00:14:40.313 fused_ordering(79) 00:14:40.313 fused_ordering(80) 00:14:40.313 fused_ordering(81) 00:14:40.313 fused_ordering(82) 00:14:40.313 fused_ordering(83) 00:14:40.313 fused_ordering(84) 00:14:40.313 fused_ordering(85) 00:14:40.313 fused_ordering(86) 00:14:40.313 fused_ordering(87) 00:14:40.313 fused_ordering(88) 00:14:40.313 fused_ordering(89) 00:14:40.313 fused_ordering(90) 00:14:40.313 fused_ordering(91) 00:14:40.313 fused_ordering(92) 00:14:40.313 fused_ordering(93) 00:14:40.313 fused_ordering(94) 00:14:40.313 fused_ordering(95) 00:14:40.313 fused_ordering(96) 00:14:40.313 fused_ordering(97) 00:14:40.313 
fused_ordering(98) 00:14:40.313 fused_ordering(99) 00:14:40.313 fused_ordering(100) 00:14:40.313 fused_ordering(101) 00:14:40.313 fused_ordering(102) 00:14:40.313 fused_ordering(103) 00:14:40.313 fused_ordering(104) 00:14:40.313 fused_ordering(105) 00:14:40.313 fused_ordering(106) 00:14:40.313 fused_ordering(107) 00:14:40.314 fused_ordering(108) 00:14:40.314 fused_ordering(109) 00:14:40.314 fused_ordering(110) 00:14:40.314 fused_ordering(111) 00:14:40.314 fused_ordering(112) 00:14:40.314 fused_ordering(113) 00:14:40.314 fused_ordering(114) 00:14:40.314 fused_ordering(115) 00:14:40.314 fused_ordering(116) 00:14:40.314 fused_ordering(117) 00:14:40.314 fused_ordering(118) 00:14:40.314 fused_ordering(119) 00:14:40.314 fused_ordering(120) 00:14:40.314 fused_ordering(121) 00:14:40.314 fused_ordering(122) 00:14:40.314 fused_ordering(123) 00:14:40.314 fused_ordering(124) 00:14:40.314 fused_ordering(125) 00:14:40.314 fused_ordering(126) 00:14:40.314 fused_ordering(127) 00:14:40.314 fused_ordering(128) 00:14:40.314 fused_ordering(129) 00:14:40.314 fused_ordering(130) 00:14:40.314 fused_ordering(131) 00:14:40.314 fused_ordering(132) 00:14:40.314 fused_ordering(133) 00:14:40.314 fused_ordering(134) 00:14:40.314 fused_ordering(135) 00:14:40.314 fused_ordering(136) 00:14:40.314 fused_ordering(137) 00:14:40.314 fused_ordering(138) 00:14:40.314 fused_ordering(139) 00:14:40.314 fused_ordering(140) 00:14:40.314 fused_ordering(141) 00:14:40.314 fused_ordering(142) 00:14:40.314 fused_ordering(143) 00:14:40.314 fused_ordering(144) 00:14:40.314 fused_ordering(145) 00:14:40.314 fused_ordering(146) 00:14:40.314 fused_ordering(147) 00:14:40.314 fused_ordering(148) 00:14:40.314 fused_ordering(149) 00:14:40.314 fused_ordering(150) 00:14:40.314 fused_ordering(151) 00:14:40.314 fused_ordering(152) 00:14:40.314 fused_ordering(153) 00:14:40.314 fused_ordering(154) 00:14:40.314 fused_ordering(155) 00:14:40.314 fused_ordering(156) 00:14:40.314 fused_ordering(157) 00:14:40.314 fused_ordering(158) 00:14:40.314 fused_ordering(159) 00:14:40.314 fused_ordering(160) 00:14:40.314 fused_ordering(161) 00:14:40.314 fused_ordering(162) 00:14:40.314 fused_ordering(163) 00:14:40.314 fused_ordering(164) 00:14:40.314 fused_ordering(165) 00:14:40.314 fused_ordering(166) 00:14:40.314 fused_ordering(167) 00:14:40.314 fused_ordering(168) 00:14:40.314 fused_ordering(169) 00:14:40.314 fused_ordering(170) 00:14:40.314 fused_ordering(171) 00:14:40.314 fused_ordering(172) 00:14:40.314 fused_ordering(173) 00:14:40.314 fused_ordering(174) 00:14:40.314 fused_ordering(175) 00:14:40.314 fused_ordering(176) 00:14:40.314 fused_ordering(177) 00:14:40.314 fused_ordering(178) 00:14:40.314 fused_ordering(179) 00:14:40.314 fused_ordering(180) 00:14:40.314 fused_ordering(181) 00:14:40.314 fused_ordering(182) 00:14:40.314 fused_ordering(183) 00:14:40.314 fused_ordering(184) 00:14:40.314 fused_ordering(185) 00:14:40.314 fused_ordering(186) 00:14:40.314 fused_ordering(187) 00:14:40.314 fused_ordering(188) 00:14:40.314 fused_ordering(189) 00:14:40.314 fused_ordering(190) 00:14:40.314 fused_ordering(191) 00:14:40.314 fused_ordering(192) 00:14:40.314 fused_ordering(193) 00:14:40.314 fused_ordering(194) 00:14:40.314 fused_ordering(195) 00:14:40.314 fused_ordering(196) 00:14:40.314 fused_ordering(197) 00:14:40.314 fused_ordering(198) 00:14:40.314 fused_ordering(199) 00:14:40.314 fused_ordering(200) 00:14:40.314 fused_ordering(201) 00:14:40.314 fused_ordering(202) 00:14:40.314 fused_ordering(203) 00:14:40.314 fused_ordering(204) 00:14:40.314 fused_ordering(205) 
00:14:40.572 fused_ordering(206) ... 00:14:41.963 fused_ordering(1023) [sequential fused_ordering(N) output for N=206 through 1023 elided; timestamps advance from 00:14:40.572 to 00:14:41.963]
13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:41.963 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:41.963 rmmod nvme_tcp 00:14:41.963 rmmod nvme_fabrics 00:14:41.963 rmmod nvme_keyring 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:42.222 13:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75769 ']' 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75769 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 75769 ']' 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 75769 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75769 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:42.222 killing process with pid 75769 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75769' 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 75769 00:14:42.222 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 75769 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.482 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:14:42.747 00:14:42.747 real 0m4.622s 00:14:42.747 user 0m4.737s 00:14:42.747 sys 0m1.783s 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:42.747 ************************************ 00:14:42.747 END TEST nvmf_fused_ordering 00:14:42.747 ************************************ 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.747 ************************************ 00:14:42.747 START TEST nvmf_ns_masking 00:14:42.747 ************************************ 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.747 * Looking for test storage... 
00:14:42.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.747 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.007 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.008 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=658a84f8-0362-4331-9649-f15aca786535 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c2e35c3a-2461-4d11-a6f0-c615fb18268d 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4c2f5f2f-8655-477b-90d4-dd853281732a 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.008 13:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.008 Cannot find device "nvmf_init_br" 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.008 Cannot find device "nvmf_init_br2" 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.008 Cannot find device "nvmf_tgt_br" 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:14:43.008 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.267 Cannot find device "nvmf_tgt_br2" 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.267 Cannot find device "nvmf_init_br" 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.267 Cannot find device "nvmf_init_br2" 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.267 Cannot find device "nvmf_tgt_br" 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:14:43.267 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.267 Cannot find device 
"nvmf_tgt_br2" 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.267 Cannot find device "nvmf_br" 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.267 Cannot find device "nvmf_init_if" 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.267 Cannot find device "nvmf_init_if2" 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:43.267 
13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.267 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:43.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:14:43.527 00:14:43.527 --- 10.0.0.3 ping statistics --- 00:14:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.527 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:43.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:43.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:14:43.527 00:14:43.527 --- 10.0.0.4 ping statistics --- 00:14:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.527 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:14:43.527 00:14:43.527 --- 10.0.0.1 ping statistics --- 00:14:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.527 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:43.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:14:43.527 00:14:43.527 --- 10.0.0.2 ping statistics --- 00:14:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.527 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=76092 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 76092 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 76092 ']' 00:14:43.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
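[Aside: the nvmf_veth_init sequence traced above assembles a veth topology bridged between the initiator-side interfaces and a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, then verifies reachability with the four pings. The following is a condensed, illustrative restatement, not the exact implementation; commands and names are taken from the trace itself, and the second interface pair (nvmf_init_if2/nvmf_tgt_if2), the iptables FORWARD rule, and error handling are omitted.]

  # Sketch only: core of the veth/bridge topology seen in the trace above.
  ip netns add nvmf_tgt_ns_spdk                       # target runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                     # bridge joins the two sides
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                  # initiator -> target check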
00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.527 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.527 [2024-11-06 13:44:24.456903] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:14:43.527 [2024-11-06 13:44:24.457151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.786 [2024-11-06 13:44:24.595094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.786 [2024-11-06 13:44:24.664925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.786 [2024-11-06 13:44:24.665165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.786 [2024-11-06 13:44:24.665301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.786 [2024-11-06 13:44:24.665349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.786 [2024-11-06 13:44:24.665376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
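
The bring-up just logged, plus the provisioning that follows in the trace, distills to the sequence below. This is a rough sketch, not the harness verbatim: rpc abbreviates the full rpc.py path (which talks to /var/tmp/spdk.sock by default), the socket-polling loop is a simplified stand-in for waitforlisten, -e 0xFFFF enables the tracepoint group mask noted above, -a on nvmf_create_subsystem allows any host (visibility is restricted per namespace later, not per subsystem), and -s sets the serial number that waitforserial greps for:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Launch the target pinned inside the test namespace, then wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Provision over JSON-RPC: TCP transport, two 64 MiB malloc bdevs, one subsystem,
  # a namespace, and a listener on the bridged address the initiator can reach.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
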
00:14:43.786 [2024-11-06 13:44:24.665771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.723 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:44.983 [2024-11-06 13:44:25.672906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.983 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:44.983 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:44.983 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:45.242 Malloc1 00:14:45.243 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:45.502 Malloc2 00:14:45.502 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:45.761 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:46.031 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:46.031 [2024-11-06 13:44:26.919360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:46.031 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:46.031 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4c2f5f2f-8655-477b-90d4-dd853281732a -a 10.0.0.3 -s 4420 -i 4 00:14:46.319 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.319 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:46.319 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.319 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:46.319 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:48.221 13:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.221 [ 0]:0x1 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.221 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.479 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a13027aab6a243a6aad350f4f0938b6a 00:14:48.479 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a13027aab6a243a6aad350f4f0938b6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.479 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.738 [ 0]:0x1 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a13027aab6a243a6aad350f4f0938b6a 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a13027aab6a243a6aad350f4f0938b6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.738 [ 1]:0x2 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.738 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.997 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:49.255 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:49.255 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4c2f5f2f-8655-477b-90d4-dd853281732a -a 10.0.0.3 -s 4420 -i 4 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:49.514 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 
00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.417 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.676 [ 0]:0x2 00:14:51.676 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.676 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.676 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:51.676 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.676 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.936 [ 0]:0x1 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a13027aab6a243a6aad350f4f0938b6a 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a13027aab6a243a6aad350f4f0938b6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.936 [ 1]:0x2 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.936 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:52.194 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:52.194 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:52.194 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:52.194 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.195 13:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.195 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.195 [ 0]:0x2 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:52.195 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.453 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:52.453 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:52.453 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4c2f5f2f-8655-477b-90d4-dd853281732a -a 10.0.0.3 -s 4420 -i 4 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:52.712 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:54.617 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:54.876 [ 0]:0x1 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a13027aab6a243a6aad350f4f0938b6a 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a13027aab6a243a6aad350f4f0938b6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:54.876 [ 1]:0x2 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.876 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.135 [ 0]:0x2 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # local es=0 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.135 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.135 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.135 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.135 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.135 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:55.135 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:55.395 [2024-11-06 13:44:36.200823] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:55.395 2024/11/06 13:44:36 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:14:55.395 request: 00:14:55.395 { 00:14:55.395 "method": "nvmf_ns_remove_host", 00:14:55.395 "params": { 00:14:55.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.395 "nsid": 2, 00:14:55.395 "host": "nqn.2016-06.io.spdk:host1" 00:14:55.395 } 00:14:55.395 } 00:14:55.395 Got JSON-RPC error response 00:14:55.395 GoRPCClient: error on JSON-RPC call 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.395 13:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.395 [ 0]:0x2 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.395 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95074fdd2629495b82b7da3e489fcddc 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95074fdd2629495b82b7da3e489fcddc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76469 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76469 /var/tmp/host.sock 00:14:55.654 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/host.sock... 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 76469 ']' 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.654 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:55.654 [2024-11-06 13:44:36.454240] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:14:55.654 [2024-11-06 13:44:36.454316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76469 ] 00:14:55.912 [2024-11-06 13:44:36.609005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.912 [2024-11-06 13:44:36.679064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.480 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:56.480 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:56.480 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.738 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.997 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 658a84f8-0362-4331-9649-f15aca786535 00:14:56.997 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:56.997 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 658A84F8036243319649F15ACA786535 -i 00:14:57.256 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c2e35c3a-2461-4d11-a6f0-c615fb18268d 00:14:57.256 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:57.256 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C2E35C3A24614D11A6F0C615FB18268D -i 00:14:57.515 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.776 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:58.034 13:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:58.034 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:58.293 nvme0n1 00:14:58.293 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:58.293 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:58.552 nvme1n2 00:14:58.552 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:58.552 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:58.552 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:58.552 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:58.552 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:58.811 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:58.811 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:58.811 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:58.811 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:59.070 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 658a84f8-0362-4331-9649-f15aca786535 == \6\5\8\a\8\4\f\8\-\0\3\6\2\-\4\3\3\1\-\9\6\4\9\-\f\1\5\a\c\a\7\8\6\5\3\5 ]] 00:14:59.070 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:59.070 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:59.070 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:59.329 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c2e35c3a-2461-4d11-a6f0-c615fb18268d == \c\2\e\3\5\c\3\a\-\2\4\6\1\-\4\d\1\1\-\a\6\f\0\-\c\6\1\5\f\b\1\8\2\6\8\d ]] 00:14:59.329 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.588 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
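
That remove_ns pair resets the subsystem for the negative test that follows; at this point the masking flow itself has been exercised end to end. Reconstructed from the trace, the core of it is a visibility probe run on the initiator plus per-host toggles issued on the target (a sketch: the helper's exact body lives in target/ns_masking.sh, and rpc again abbreviates the full rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # -q pins the host NQN that masking keys on (the trace also passes a fixed
  # host UUID via -I and 4 I/O queues via -i).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -a 10.0.0.3 -s 4420 -i 4

  ns_is_visible() {          # $1 is the nsid in hex, e.g. 0x1 or 0x2
      nvme list-ns /dev/nvme0 | grep "$1"
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      # A namespace masked away from this host identifies with an all-zero NGUID.
      [[ $nguid != 00000000000000000000000000000000 ]]
  }

  # Namespaces added with --no-auto-visible stay hidden until the host NQN is allowed;
  # removing the host hides them again, live, without reconnecting. Removing a host
  # that was never added fails with -32602 Invalid parameters, as asserted above.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1                        # passes while host1 is on the allow list
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1 && echo unexpected     # probe fails again once the host is removed
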
00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 658a84f8-0362-4331-9649-f15aca786535 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 658A84F8036243319649F15ACA786535 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 658A84F8036243319649F15ACA786535 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:59.847 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 658A84F8036243319649F15ACA786535 00:14:59.847 [2024-11-06 13:44:40.763144] bdev.c:8469:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:59.847 [2024-11-06 13:44:40.763197] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:59.847 [2024-11-06 13:44:40.763209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.847 2024/11/06 13:44:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:658A84F8036243319649F15ACA786535 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:59.848 request: 00:14:59.848 { 00:14:59.848 "method": "nvmf_subsystem_add_ns", 00:14:59.848 "params": { 00:14:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.848 "namespace": { 00:14:59.848 "bdev_name": "invalid", 00:14:59.848 "nsid": 1, 00:14:59.848 "nguid": "658A84F8036243319649F15ACA786535", 00:14:59.848 "no_auto_visible": false 00:14:59.848 } 00:14:59.848 } 00:14:59.848 } 00:14:59.848 Got JSON-RPC error response 00:14:59.848 GoRPCClient: error on JSON-RPC call 00:15:00.107 13:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 658a84f8-0362-4331-9649-f15aca786535 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:00.107 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 658A84F8036243319649F15ACA786535 -i 00:15:00.107 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76469 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 76469 ']' 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 76469 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76469 00:15:02.643 killing process with pid 76469 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76469' 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 76469 00:15:02.643 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 76469 00:15:03.210 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:03.210 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:03.210 rmmod nvme_tcp 00:15:03.470 rmmod nvme_fabrics 00:15:03.470 rmmod nvme_keyring 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 76092 ']' 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 76092 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 76092 ']' 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 76092 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76092 00:15:03.470 killing process with pid 76092 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76092' 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 76092 00:15:03.470 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 76092 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:03.729 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:03.730 13:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:03.730 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:03.730 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.730 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:03.730 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:15:03.989 00:15:03.989 real 0m21.314s 00:15:03.989 user 0m33.193s 00:15:03.989 sys 0m4.828s 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.989 ************************************ 00:15:03.989 END TEST nvmf_ns_masking 00:15:03.989 ************************************ 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:03.989 13:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:04.249 13:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:04.249 13:44:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.249 ************************************ 00:15:04.249 START TEST nvmf_auth_target 00:15:04.249 ************************************ 00:15:04.249 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh 
--transport=tcp 00:15:04.249 * Looking for test storage... 00:15:04.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:04.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.249 --rc genhtml_branch_coverage=1 00:15:04.249 --rc genhtml_function_coverage=1 00:15:04.249 --rc genhtml_legend=1 00:15:04.249 --rc geninfo_all_blocks=1 00:15:04.249 --rc geninfo_unexecuted_blocks=1 00:15:04.249 00:15:04.249 ' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:04.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.249 --rc genhtml_branch_coverage=1 00:15:04.249 --rc genhtml_function_coverage=1 00:15:04.249 --rc genhtml_legend=1 00:15:04.249 --rc geninfo_all_blocks=1 00:15:04.249 --rc geninfo_unexecuted_blocks=1 00:15:04.249 00:15:04.249 ' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:04.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.249 --rc genhtml_branch_coverage=1 00:15:04.249 --rc genhtml_function_coverage=1 00:15:04.249 --rc genhtml_legend=1 00:15:04.249 --rc geninfo_all_blocks=1 00:15:04.249 --rc geninfo_unexecuted_blocks=1 00:15:04.249 00:15:04.249 ' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:04.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.249 --rc genhtml_branch_coverage=1 00:15:04.249 --rc genhtml_function_coverage=1 00:15:04.249 --rc genhtml_legend=1 00:15:04.249 --rc geninfo_all_blocks=1 00:15:04.249 --rc geninfo_unexecuted_blocks=1 00:15:04.249 00:15:04.249 ' 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.249 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:04.510 
13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:04.510 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:04.511 Cannot find device "nvmf_init_br" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:04.511 Cannot find device "nvmf_init_br2" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:04.511 Cannot find device "nvmf_tgt_br" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.511 Cannot find device "nvmf_tgt_br2" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:04.511 Cannot find device "nvmf_init_br" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:04.511 Cannot find device "nvmf_init_br2" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:04.511 Cannot find device "nvmf_tgt_br" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:04.511 Cannot find device "nvmf_tgt_br2" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:04.511 Cannot find device "nvmf_br" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:04.511 Cannot find device "nvmf_init_if" 00:15:04.511 13:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:04.511 Cannot find device "nvmf_init_if2" 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:15:04.511 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.770 13:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.770 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:04.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:15:04.771 00:15:04.771 --- 10.0.0.3 ping statistics --- 00:15:04.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.771 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:04.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:04.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:04.771 00:15:04.771 --- 10.0.0.4 ping statistics --- 00:15:04.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.771 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:04.771 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:04.771 00:15:04.771 --- 10.0.0.1 ping statistics --- 00:15:04.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.771 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:05.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:15:05.030 00:15:05.030 --- 10.0.0.2 ping statistics --- 00:15:05.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.030 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76972 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76972 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 76972 ']' 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
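The teardown at the top of this section and the nvmf_veth_init sequence just traced reduce to a small recipe: two veth pairs, one bridge, one network namespace, two iptables ACCEPT rules, then ping in both directions. A minimal sketch of the same topology follows, using only commands that appear in the trace; the demo_* names are illustrative stand-ins for the nvmf_* interfaces above, and it must run as root.

#!/usr/bin/env bash
# Sketch of the veth/bridge/netns test topology built by nvmf_veth_init.
ip netns add demo_tgt_ns                                   # target-side namespace
ip link add demo_init type veth peer name demo_init_br     # initiator veth pair
ip link add demo_tgt  type veth peer name demo_tgt_br      # target veth pair
ip link set demo_tgt netns demo_tgt_ns                     # move target end into the namespace
ip addr add 10.0.0.1/24 dev demo_init                      # initiator address
ip netns exec demo_tgt_ns ip addr add 10.0.0.3/24 dev demo_tgt   # target address
ip link add demo_br type bridge                            # bridge joining both pairs
ip link set demo_init up
ip link set demo_init_br up
ip link set demo_tgt_br up
ip link set demo_br up
ip netns exec demo_tgt_ns ip link set demo_tgt up
ip netns exec demo_tgt_ns ip link set lo up
ip link set demo_init_br master demo_br                    # enslave both bridge-side ends
ip link set demo_tgt_br master demo_br
iptables -I INPUT 1 -i demo_init -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
iptables -A FORWARD -i demo_br -o demo_br -j ACCEPT              # let bridged traffic through
ping -c 1 10.0.0.3                                         # initiator -> target
ip netns exec demo_tgt_ns ping -c 1 10.0.0.1               # target -> initiator

The addressed *_if ends stay in their respective scopes while their *_br peers hang off the bridge, which is why the pings above cross namespaces without any routing setup.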
00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:05.030 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=77016 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=329126c10cc0c0b2f766c5838aaeb8b1202a4b3e00d830c4 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j0i 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 329126c10cc0c0b2f766c5838aaeb8b1202a4b3e00d830c4 0 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 329126c10cc0c0b2f766c5838aaeb8b1202a4b3e00d830c4 0 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=329126c10cc0c0b2f766c5838aaeb8b1202a4b3e00d830c4 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:05.967 13:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j0i 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j0i 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.j0i 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e5de509cc33987fa8463a1a21d56ba683a29f0e98e5bba55dc4b43a0e413a971 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zdK 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e5de509cc33987fa8463a1a21d56ba683a29f0e98e5bba55dc4b43a0e413a971 3 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e5de509cc33987fa8463a1a21d56ba683a29f0e98e5bba55dc4b43a0e413a971 3 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e5de509cc33987fa8463a1a21d56ba683a29f0e98e5bba55dc4b43a0e413a971 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:05.967 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zdK 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zdK 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.zdK 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:06.227 13:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ac59757edf929acdfbec87ace5ce24ca 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Gp2 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ac59757edf929acdfbec87ace5ce24ca 1 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ac59757edf929acdfbec87ace5ce24ca 1 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ac59757edf929acdfbec87ace5ce24ca 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:06.227 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Gp2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Gp2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Gp2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2070e77a8759fa120fcdcd4eb87cdd268c39f114ca23d120 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.52W 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2070e77a8759fa120fcdcd4eb87cdd268c39f114ca23d120 2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2070e77a8759fa120fcdcd4eb87cdd268c39f114ca23d120 2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2070e77a8759fa120fcdcd4eb87cdd268c39f114ca23d120 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.52W 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.52W 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.52W 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d1971b0c8cd879916384f314c2f91552290840844e5e9157 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gx1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d1971b0c8cd879916384f314c2f91552290840844e5e9157 2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d1971b0c8cd879916384f314c2f91552290840844e5e9157 2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d1971b0c8cd879916384f314c2f91552290840844e5e9157 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gx1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gx1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.gx1 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.227 13:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:06.227 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e4706a95991486f6a92ed15e5546834 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GjI 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e4706a95991486f6a92ed15e5546834 1 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e4706a95991486f6a92ed15e5546834 1 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e4706a95991486f6a92ed15e5546834 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GjI 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GjI 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.GjI 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70c9e82adc4014d1c07dd7f446f58b25642942be75c32e39ea646af184e8664d 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3fS 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
70c9e82adc4014d1c07dd7f446f58b25642942be75c32e39ea646af184e8664d 3 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 70c9e82adc4014d1c07dd7f446f58b25642942be75c32e39ea646af184e8664d 3 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70c9e82adc4014d1c07dd7f446f58b25642942be75c32e39ea646af184e8664d 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3fS 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3fS 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.3fS 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76972 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 76972 ']' 00:15:06.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:06.487 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:06.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 77016 /var/tmp/host.sock 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 77016 ']' 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
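The gen_dhchap_key/format_dhchap_key pair traced above turns /dev/urandom output into the DHHC-1 secrets used below (e.g. DHHC-1:00:MzI5...). A rough reconstruction of that formatting step follows; it assumes the nvme-cli gen-dhchap-key framing, in which the base64 payload is the ASCII hex secret followed by its little-endian CRC32, and the digest byte is "00" for none through "03" for sha512. Treat the exact trailer layout as an assumption rather than a confirmed reading of the python snippet elided in the trace.

#!/usr/bin/env bash
# Sketch of DHHC-1 key generation as suggested by the trace; the CRC32
# trailer and digest-byte encoding are assumptions (nvme-cli style).
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as for the null/48 key above
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed little-endian CRC32 trailer
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
EOF

The trace then writes each such key to a mktemp file and chmod 0600's it before keyring_file_add_key picks it up, which matches the mktemp/chmod/echo steps above.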
00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:06.747 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.j0i 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.j0i 00:15:07.006 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.j0i 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.zdK ]] 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdK 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdK 00:15:07.265 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdK 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Gp2 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Gp2 00:15:07.524 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Gp2 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.52W ]] 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.52W 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.52W 00:15:07.782 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.52W 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gx1 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gx1 00:15:08.041 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gx1 00:15:08.300 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.GjI ]] 00:15:08.300 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GjI 00:15:08.300 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.300 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GjI 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GjI 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3fS 00:15:08.300 13:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.300 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.3fS 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.3fS 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.559 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.818 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.077 00:15:09.077 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.077 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.077 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.335 { 00:15:09.335 "auth": { 00:15:09.335 "dhgroup": "null", 00:15:09.335 "digest": "sha256", 00:15:09.335 "state": "completed" 00:15:09.335 }, 00:15:09.335 "cntlid": 1, 00:15:09.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:09.335 "listen_address": { 00:15:09.335 "adrfam": "IPv4", 00:15:09.335 "traddr": "10.0.0.3", 00:15:09.335 "trsvcid": "4420", 00:15:09.335 "trtype": "TCP" 00:15:09.335 }, 00:15:09.335 "peer_address": { 00:15:09.335 "adrfam": "IPv4", 00:15:09.335 "traddr": "10.0.0.1", 00:15:09.335 "trsvcid": "54770", 00:15:09.335 "trtype": "TCP" 00:15:09.335 }, 00:15:09.335 "qid": 0, 00:15:09.335 "state": "enabled", 00:15:09.335 "thread": "nvmf_tgt_poll_group_000" 00:15:09.335 } 00:15:09.335 ]' 00:15:09.335 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.595 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.853 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:09.853 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:13.138 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.399 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.399 13:44:54 
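
The hostrpc helper echoed just above expands to the host-socket rpc.py call that follows it. Lifted out of the harness, and assuming the same target listener on 10.0.0.3:4420, the authenticated attach for this round is a single RPC; key1/ckey1 request bidirectional DH-CHAP:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce

# Host-side attach; authentication runs during controller initialization,
# so a successful attach implies the handshake succeeded (the test still
# double-checks via nvmf_subsystem_get_qpairs afterwards).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
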
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.658 00:15:13.658 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.658 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.658 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.918 { 00:15:13.918 "auth": { 00:15:13.918 "dhgroup": "null", 00:15:13.918 "digest": "sha256", 00:15:13.918 "state": "completed" 00:15:13.918 }, 00:15:13.918 "cntlid": 3, 00:15:13.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:13.918 "listen_address": { 00:15:13.918 "adrfam": "IPv4", 00:15:13.918 "traddr": "10.0.0.3", 00:15:13.918 "trsvcid": "4420", 00:15:13.918 "trtype": "TCP" 00:15:13.918 }, 00:15:13.918 "peer_address": { 00:15:13.918 "adrfam": "IPv4", 00:15:13.918 "traddr": "10.0.0.1", 00:15:13.918 "trsvcid": "54800", 00:15:13.918 "trtype": "TCP" 00:15:13.918 }, 00:15:13.918 "qid": 0, 00:15:13.918 "state": "enabled", 00:15:13.918 "thread": "nvmf_tgt_poll_group_000" 00:15:13.918 } 00:15:13.918 ]' 00:15:13.918 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.177 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.177 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.178 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:14.178 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.178 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.178 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.178 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.437 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret 
DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:14.437 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.005 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.264 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.522 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.780 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.038 { 00:15:16.038 "auth": { 00:15:16.038 "dhgroup": "null", 00:15:16.038 "digest": "sha256", 00:15:16.038 "state": "completed" 00:15:16.038 }, 00:15:16.038 "cntlid": 5, 00:15:16.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:16.038 "listen_address": { 00:15:16.038 "adrfam": "IPv4", 00:15:16.038 "traddr": "10.0.0.3", 00:15:16.038 "trsvcid": "4420", 00:15:16.038 "trtype": "TCP" 00:15:16.038 }, 00:15:16.038 "peer_address": { 00:15:16.038 "adrfam": "IPv4", 00:15:16.038 "traddr": "10.0.0.1", 00:15:16.038 "trsvcid": "54822", 00:15:16.038 "trtype": "TCP" 00:15:16.038 }, 00:15:16.038 "qid": 0, 00:15:16.038 "state": "enabled", 00:15:16.038 "thread": "nvmf_tgt_poll_group_000" 00:15:16.038 } 00:15:16.038 ]' 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.038 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.297 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:16.297 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.872 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.131 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.389 00:15:17.389 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.389 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.389 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.648 { 00:15:17.648 "auth": { 00:15:17.648 "dhgroup": "null", 00:15:17.648 "digest": "sha256", 00:15:17.648 "state": "completed" 00:15:17.648 }, 00:15:17.648 "cntlid": 7, 00:15:17.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:17.648 "listen_address": { 00:15:17.648 "adrfam": "IPv4", 00:15:17.648 "traddr": "10.0.0.3", 00:15:17.648 "trsvcid": "4420", 00:15:17.648 "trtype": "TCP" 00:15:17.648 }, 00:15:17.648 "peer_address": { 00:15:17.648 "adrfam": "IPv4", 00:15:17.648 "traddr": "10.0.0.1", 00:15:17.648 "trsvcid": "35298", 00:15:17.648 "trtype": "TCP" 00:15:17.648 }, 00:15:17.648 "qid": 0, 00:15:17.648 "state": "enabled", 00:15:17.648 "thread": "nvmf_tgt_poll_group_000" 00:15:17.648 } 00:15:17.648 ]' 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.648 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.907 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:17.907 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.907 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.907 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.907 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.166 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:18.166 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.734 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.992 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.252 00:15:19.252 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.252 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.252 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.511 { 00:15:19.511 "auth": { 00:15:19.511 "dhgroup": "ffdhe2048", 00:15:19.511 "digest": "sha256", 00:15:19.511 "state": "completed" 00:15:19.511 }, 00:15:19.511 "cntlid": 9, 00:15:19.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:19.511 "listen_address": { 00:15:19.511 "adrfam": "IPv4", 00:15:19.511 "traddr": "10.0.0.3", 00:15:19.511 "trsvcid": "4420", 00:15:19.511 "trtype": "TCP" 00:15:19.511 }, 00:15:19.511 "peer_address": { 00:15:19.511 "adrfam": "IPv4", 00:15:19.511 "traddr": "10.0.0.1", 00:15:19.511 "trsvcid": "35324", 00:15:19.511 "trtype": "TCP" 00:15:19.511 }, 00:15:19.511 "qid": 0, 00:15:19.511 "state": "enabled", 00:15:19.511 "thread": "nvmf_tgt_poll_group_000" 00:15:19.511 } 00:15:19.511 ]' 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.511 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.770 
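
The round that just finished is the template every digest/dhgroup/key combination in this test repeats: pin the host's negotiable DH-CHAP parameters, authorize the host NQN with the round's keys, attach, verify the negotiated auth on a live qpair, detach. Condensed into one sketch using this run's names:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host to one digest/dhgroup pair for this round.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Authorize the host on the subsystem with the matching keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach from the host app and confirm the controller exists.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Check the negotiated auth on the target, then tear down.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
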
13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:19.770 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:20.339 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.339 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:20.339 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.339 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.597 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.597 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.597 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.597 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.598 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.856 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.856 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.856 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.856 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.113 00:15:21.113 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.113 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.113 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.371 { 00:15:21.371 "auth": { 00:15:21.371 "dhgroup": "ffdhe2048", 00:15:21.371 "digest": "sha256", 00:15:21.371 "state": "completed" 00:15:21.371 }, 00:15:21.371 "cntlid": 11, 00:15:21.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:21.371 "listen_address": { 00:15:21.371 "adrfam": "IPv4", 00:15:21.371 "traddr": "10.0.0.3", 00:15:21.371 "trsvcid": "4420", 00:15:21.371 "trtype": "TCP" 00:15:21.371 }, 00:15:21.371 "peer_address": { 00:15:21.371 "adrfam": "IPv4", 00:15:21.371 "traddr": "10.0.0.1", 00:15:21.371 "trsvcid": "35368", 00:15:21.371 "trtype": "TCP" 00:15:21.371 }, 00:15:21.371 "qid": 0, 00:15:21.371 "state": "enabled", 00:15:21.371 "thread": "nvmf_tgt_poll_group_000" 00:15:21.371 } 00:15:21.371 ]' 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.371 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.371 
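
The DHHC-1:<hmac>:<base64>: strings passed below (and throughout this run) are standard NVMe-oF DH-CHAP secrets; the harness pre-generated them into the /tmp/spdk.key-* files loaded at the top of this test. For reference only — nothing in this run executes it — a recent nvme-cli can mint compatible secrets; the flags here assume nvme-cli's gen-dhchap-key plugin, so treat the exact spelling as an assumption:

# Hypothetical, not from this run: --hmac picks the transform and hence the
# prefix (0 -> DHHC-1:00, 1 -> :01/SHA-256, 2 -> :02/SHA-384, 3 -> :03/SHA-512).
nvme gen-dhchap-key --key-length=32 --hmac=1 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
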
13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.629 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:21.629 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:22.194 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.452 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.453 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.711 00:15:22.970 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.970 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.970 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.273 { 00:15:23.273 "auth": { 00:15:23.273 "dhgroup": "ffdhe2048", 00:15:23.273 "digest": "sha256", 00:15:23.273 "state": "completed" 00:15:23.273 }, 00:15:23.273 "cntlid": 13, 00:15:23.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:23.273 "listen_address": { 00:15:23.273 "adrfam": "IPv4", 00:15:23.273 "traddr": "10.0.0.3", 00:15:23.273 "trsvcid": "4420", 00:15:23.273 "trtype": "TCP" 00:15:23.273 }, 00:15:23.273 "peer_address": { 00:15:23.273 "adrfam": "IPv4", 00:15:23.273 "traddr": "10.0.0.1", 00:15:23.273 "trsvcid": "35404", 00:15:23.273 "trtype": "TCP" 00:15:23.273 }, 00:15:23.273 "qid": 0, 00:15:23.273 "state": "enabled", 00:15:23.273 "thread": "nvmf_tgt_poll_group_000" 00:15:23.273 } 00:15:23.273 ]' 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.273 13:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.273 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.273 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.273 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.273 13:45:04 
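
The qpair dump above is what each round's verification keys off: exactly one enabled TCP qpair whose auth object must echo the digest and dhgroup under test, with state "completed". The three jq probes in the trace reduce to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Pull the target's view of the qpair and assert the negotiated auth.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
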
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.273 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.531 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:23.531 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:24.098 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.358 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.616 00:15:24.616 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.616 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.616 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.875 { 00:15:24.875 "auth": { 00:15:24.875 "dhgroup": "ffdhe2048", 00:15:24.875 "digest": "sha256", 00:15:24.875 "state": "completed" 00:15:24.875 }, 00:15:24.875 "cntlid": 15, 00:15:24.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:24.875 "listen_address": { 00:15:24.875 "adrfam": "IPv4", 00:15:24.875 "traddr": "10.0.0.3", 00:15:24.875 "trsvcid": "4420", 00:15:24.875 "trtype": "TCP" 00:15:24.875 }, 00:15:24.875 "peer_address": { 00:15:24.875 "adrfam": "IPv4", 00:15:24.875 "traddr": "10.0.0.1", 00:15:24.875 "trsvcid": "35430", 00:15:24.875 "trtype": "TCP" 00:15:24.875 }, 00:15:24.875 "qid": 0, 00:15:24.875 "state": "enabled", 00:15:24.875 "thread": "nvmf_tgt_poll_group_000" 00:15:24.875 } 00:15:24.875 ]' 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:24.875 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.135 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.135 
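
Besides the RPC-driven attaches, every round also exercises the kernel initiator: the nvme_connect helper that follows hands the same secrets straight to nvme-cli. Stripped of this run's literal key material (the two DHHC-1 placeholders below are stand-ins, not real secrets), the invocation is:

hostid=f9261117-b8af-4744-8c64-783a612116ce

# Kernel-side fabrics connect with bidirectional DH-CHAP, mirroring the
# traced command; -i 1 requests one I/O queue and -l 0 sets ctrl-loss-tmo
# to zero so an authentication failure surfaces immediately.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:03:<host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller-secret>:'

# ...and the matching teardown used between rounds:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
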
13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.135 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.135 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:25.135 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.071 13:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.330 00:15:26.330 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.330 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.330 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.590 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.590 { 00:15:26.590 "auth": { 00:15:26.590 "dhgroup": "ffdhe3072", 00:15:26.590 "digest": "sha256", 00:15:26.590 "state": "completed" 00:15:26.590 }, 00:15:26.591 "cntlid": 17, 00:15:26.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:26.591 "listen_address": { 00:15:26.591 "adrfam": "IPv4", 00:15:26.591 "traddr": "10.0.0.3", 00:15:26.591 "trsvcid": "4420", 00:15:26.591 "trtype": "TCP" 00:15:26.591 }, 00:15:26.591 "peer_address": { 00:15:26.591 "adrfam": "IPv4", 00:15:26.591 "traddr": "10.0.0.1", 00:15:26.591 "trsvcid": "35458", 00:15:26.591 "trtype": "TCP" 00:15:26.591 }, 00:15:26.591 "qid": 0, 00:15:26.591 "state": "enabled", 00:15:26.591 "thread": "nvmf_tgt_poll_group_000" 00:15:26.591 } 00:15:26.591 ]' 00:15:26.591 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.852 13:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.852 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.111 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:27.111 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.679 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.939 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.197 00:15:28.197 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.197 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.197 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.457 { 00:15:28.457 "auth": { 00:15:28.457 "dhgroup": "ffdhe3072", 00:15:28.457 "digest": "sha256", 00:15:28.457 "state": "completed" 00:15:28.457 }, 00:15:28.457 "cntlid": 19, 00:15:28.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:28.457 "listen_address": { 00:15:28.457 "adrfam": "IPv4", 00:15:28.457 "traddr": "10.0.0.3", 00:15:28.457 "trsvcid": "4420", 00:15:28.457 "trtype": "TCP" 00:15:28.457 }, 00:15:28.457 "peer_address": { 00:15:28.457 "adrfam": "IPv4", 00:15:28.457 "traddr": "10.0.0.1", 00:15:28.457 "trsvcid": "34804", 00:15:28.457 "trtype": "TCP" 00:15:28.457 }, 00:15:28.457 "qid": 0, 00:15:28.457 "state": "enabled", 00:15:28.457 "thread": "nvmf_tgt_poll_group_000" 00:15:28.457 } 00:15:28.457 ]' 00:15:28.457 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.716 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.974 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:28.974 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.542 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.801 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.060 00:15:30.060 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.060 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.060 13:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.320 { 00:15:30.320 "auth": { 00:15:30.320 "dhgroup": "ffdhe3072", 00:15:30.320 "digest": "sha256", 00:15:30.320 "state": "completed" 00:15:30.320 }, 00:15:30.320 "cntlid": 21, 00:15:30.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:30.320 "listen_address": { 00:15:30.320 "adrfam": "IPv4", 00:15:30.320 "traddr": "10.0.0.3", 00:15:30.320 "trsvcid": "4420", 00:15:30.320 "trtype": "TCP" 00:15:30.320 }, 00:15:30.320 "peer_address": { 00:15:30.320 "adrfam": "IPv4", 00:15:30.320 "traddr": "10.0.0.1", 00:15:30.320 "trsvcid": "34844", 00:15:30.320 "trtype": "TCP" 00:15:30.320 }, 00:15:30.320 "qid": 0, 00:15:30.320 "state": "enabled", 00:15:30.320 "thread": "nvmf_tgt_poll_group_000" 00:15:30.320 } 00:15:30.320 ]' 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.320 13:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.320 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.579 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:30.579 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:31.145 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.405 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.664 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.664 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.664 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.664 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.923 00:15:31.923 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.923 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.923 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.183 { 00:15:32.183 "auth": { 00:15:32.183 "dhgroup": "ffdhe3072", 00:15:32.183 "digest": "sha256", 00:15:32.183 "state": "completed" 00:15:32.183 }, 00:15:32.183 "cntlid": 23, 00:15:32.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:32.183 "listen_address": { 00:15:32.183 "adrfam": "IPv4", 00:15:32.183 "traddr": "10.0.0.3", 00:15:32.183 "trsvcid": "4420", 00:15:32.183 "trtype": "TCP" 00:15:32.183 }, 00:15:32.183 "peer_address": { 00:15:32.183 "adrfam": "IPv4", 00:15:32.183 "traddr": "10.0.0.1", 00:15:32.183 "trsvcid": "34890", 00:15:32.183 "trtype": "TCP" 00:15:32.183 }, 00:15:32.183 "qid": 0, 00:15:32.183 "state": "enabled", 00:15:32.183 "thread": "nvmf_tgt_poll_group_000" 00:15:32.183 } 00:15:32.183 ]' 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:32.183 13:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.183 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.183 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.183 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.183 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.183 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.442 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:32.442 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:33.009 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.010 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.269 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.836 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.836 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.094 { 00:15:34.094 "auth": { 00:15:34.094 "dhgroup": "ffdhe4096", 00:15:34.094 "digest": "sha256", 00:15:34.094 "state": "completed" 00:15:34.094 }, 00:15:34.094 "cntlid": 25, 00:15:34.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:34.094 "listen_address": { 00:15:34.094 "adrfam": "IPv4", 00:15:34.094 "traddr": "10.0.0.3", 00:15:34.094 "trsvcid": "4420", 00:15:34.094 "trtype": "TCP" 00:15:34.094 }, 00:15:34.094 "peer_address": { 00:15:34.094 "adrfam": "IPv4", 00:15:34.094 "traddr": "10.0.0.1", 00:15:34.094 "trsvcid": "34932", 00:15:34.094 "trtype": "TCP" 00:15:34.094 }, 00:15:34.094 "qid": 0, 00:15:34.094 "state": "enabled", 00:15:34.094 "thread": "nvmf_tgt_poll_group_000" 00:15:34.094 } 00:15:34.094 ]' 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.094 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.353 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:34.353 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.921 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.180 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.438 00:15:35.438 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.438 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.438 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.697 { 00:15:35.697 "auth": { 00:15:35.697 "dhgroup": "ffdhe4096", 00:15:35.697 "digest": "sha256", 00:15:35.697 "state": "completed" 00:15:35.697 }, 00:15:35.697 "cntlid": 27, 00:15:35.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:35.697 "listen_address": { 00:15:35.697 "adrfam": "IPv4", 00:15:35.697 "traddr": "10.0.0.3", 00:15:35.697 "trsvcid": "4420", 00:15:35.697 "trtype": "TCP" 00:15:35.697 }, 00:15:35.697 "peer_address": { 00:15:35.697 "adrfam": "IPv4", 00:15:35.697 "traddr": "10.0.0.1", 00:15:35.697 "trsvcid": "34968", 00:15:35.697 "trtype": "TCP" 00:15:35.697 }, 00:15:35.697 "qid": 0, 
00:15:35.697 "state": "enabled", 00:15:35.697 "thread": "nvmf_tgt_poll_group_000" 00:15:35.697 } 00:15:35.697 ]' 00:15:35.697 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.955 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.955 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.955 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.955 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.956 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.956 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.956 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.214 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:36.214 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.781 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.040 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.302 00:15:37.302 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.302 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.302 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.562 { 00:15:37.562 "auth": { 00:15:37.562 "dhgroup": "ffdhe4096", 00:15:37.562 "digest": "sha256", 00:15:37.562 "state": "completed" 00:15:37.562 }, 00:15:37.562 "cntlid": 29, 00:15:37.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:37.562 "listen_address": { 00:15:37.562 "adrfam": "IPv4", 00:15:37.562 "traddr": "10.0.0.3", 00:15:37.562 "trsvcid": "4420", 00:15:37.562 "trtype": "TCP" 00:15:37.562 }, 00:15:37.562 "peer_address": { 00:15:37.562 "adrfam": "IPv4", 00:15:37.562 "traddr": "10.0.0.1", 
00:15:37.562 "trsvcid": "46514", 00:15:37.562 "trtype": "TCP" 00:15:37.562 }, 00:15:37.562 "qid": 0, 00:15:37.562 "state": "enabled", 00:15:37.562 "thread": "nvmf_tgt_poll_group_000" 00:15:37.562 } 00:15:37.562 ]' 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.562 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.820 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:37.820 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:38.387 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.387 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:38.387 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.387 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.646 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.991 00:15:38.991 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.991 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.991 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.250 { 00:15:39.250 "auth": { 00:15:39.250 "dhgroup": "ffdhe4096", 00:15:39.250 "digest": "sha256", 00:15:39.250 "state": "completed" 00:15:39.250 }, 00:15:39.250 "cntlid": 31, 00:15:39.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:39.250 "listen_address": { 00:15:39.250 "adrfam": "IPv4", 00:15:39.250 "traddr": "10.0.0.3", 00:15:39.250 "trsvcid": "4420", 00:15:39.250 "trtype": "TCP" 00:15:39.250 }, 00:15:39.250 "peer_address": { 00:15:39.250 "adrfam": "IPv4", 00:15:39.250 "traddr": 
"10.0.0.1", 00:15:39.250 "trsvcid": "46548", 00:15:39.250 "trtype": "TCP" 00:15:39.250 }, 00:15:39.250 "qid": 0, 00:15:39.250 "state": "enabled", 00:15:39.250 "thread": "nvmf_tgt_poll_group_000" 00:15:39.250 } 00:15:39.250 ]' 00:15:39.250 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.508 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.767 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:39.767 13:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.333 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.591 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.159 00:15:41.159 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.159 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.159 13:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.159 { 00:15:41.159 "auth": { 00:15:41.159 "dhgroup": "ffdhe6144", 00:15:41.159 "digest": "sha256", 00:15:41.159 "state": "completed" 00:15:41.159 }, 00:15:41.159 "cntlid": 33, 00:15:41.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:41.159 "listen_address": { 00:15:41.159 "adrfam": "IPv4", 00:15:41.159 "traddr": "10.0.0.3", 00:15:41.159 "trsvcid": "4420", 00:15:41.159 
"trtype": "TCP" 00:15:41.159 }, 00:15:41.159 "peer_address": { 00:15:41.159 "adrfam": "IPv4", 00:15:41.159 "traddr": "10.0.0.1", 00:15:41.159 "trsvcid": "46566", 00:15:41.159 "trtype": "TCP" 00:15:41.159 }, 00:15:41.159 "qid": 0, 00:15:41.159 "state": "enabled", 00:15:41.159 "thread": "nvmf_tgt_poll_group_000" 00:15:41.159 } 00:15:41.159 ]' 00:15:41.159 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.418 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.676 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:41.676 13:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.253 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.510 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.076 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.076 13:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.076 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.076 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.076 { 00:15:43.076 "auth": { 00:15:43.076 "dhgroup": "ffdhe6144", 00:15:43.076 "digest": "sha256", 00:15:43.076 "state": "completed" 00:15:43.076 }, 00:15:43.076 "cntlid": 35, 00:15:43.076 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:43.076 "listen_address": { 00:15:43.076 "adrfam": "IPv4", 00:15:43.076 "traddr": "10.0.0.3", 00:15:43.076 "trsvcid": "4420", 00:15:43.076 "trtype": "TCP" 00:15:43.076 }, 00:15:43.076 "peer_address": { 00:15:43.076 "adrfam": "IPv4", 00:15:43.076 "traddr": "10.0.0.1", 00:15:43.076 "trsvcid": "46586", 00:15:43.076 "trtype": "TCP" 00:15:43.076 }, 00:15:43.076 "qid": 0, 00:15:43.076 "state": "enabled", 00:15:43.076 "thread": "nvmf_tgt_poll_group_000" 00:15:43.076 } 00:15:43.076 ]' 00:15:43.076 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.335 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.593 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:43.593 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:44.158 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.159 13:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.417 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.675 00:15:44.934 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.934 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.934 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.192 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.192 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.192 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.192 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.193 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.193 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.193 { 00:15:45.193 "auth": { 00:15:45.193 "dhgroup": "ffdhe6144", 
00:15:45.193 "digest": "sha256", 00:15:45.193 "state": "completed" 00:15:45.193 }, 00:15:45.193 "cntlid": 37, 00:15:45.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:45.193 "listen_address": { 00:15:45.193 "adrfam": "IPv4", 00:15:45.193 "traddr": "10.0.0.3", 00:15:45.193 "trsvcid": "4420", 00:15:45.193 "trtype": "TCP" 00:15:45.193 }, 00:15:45.193 "peer_address": { 00:15:45.193 "adrfam": "IPv4", 00:15:45.193 "traddr": "10.0.0.1", 00:15:45.193 "trsvcid": "46614", 00:15:45.193 "trtype": "TCP" 00:15:45.193 }, 00:15:45.193 "qid": 0, 00:15:45.193 "state": "enabled", 00:15:45.193 "thread": "nvmf_tgt_poll_group_000" 00:15:45.193 } 00:15:45.193 ]' 00:15:45.193 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.193 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.193 13:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.193 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.193 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.193 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.193 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.193 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.452 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:45.452 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:46.020 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:46.282 13:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.549 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.825 00:15:46.825 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.825 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.825 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.084 { 00:15:47.084 "auth": { 00:15:47.084 "dhgroup": 
"ffdhe6144", 00:15:47.084 "digest": "sha256", 00:15:47.084 "state": "completed" 00:15:47.084 }, 00:15:47.084 "cntlid": 39, 00:15:47.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:47.084 "listen_address": { 00:15:47.084 "adrfam": "IPv4", 00:15:47.084 "traddr": "10.0.0.3", 00:15:47.084 "trsvcid": "4420", 00:15:47.084 "trtype": "TCP" 00:15:47.084 }, 00:15:47.084 "peer_address": { 00:15:47.084 "adrfam": "IPv4", 00:15:47.084 "traddr": "10.0.0.1", 00:15:47.084 "trsvcid": "46652", 00:15:47.084 "trtype": "TCP" 00:15:47.084 }, 00:15:47.084 "qid": 0, 00:15:47.084 "state": "enabled", 00:15:47.084 "thread": "nvmf_tgt_poll_group_000" 00:15:47.084 } 00:15:47.084 ]' 00:15:47.084 13:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.343 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.601 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:47.602 13:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.538 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.473 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.473 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.473 13:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.473 { 00:15:49.473 "auth": { 00:15:49.473 "dhgroup": "ffdhe8192", 00:15:49.473 "digest": "sha256", 00:15:49.473 "state": "completed" 00:15:49.473 }, 00:15:49.473 "cntlid": 41, 00:15:49.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:49.473 "listen_address": { 00:15:49.473 "adrfam": "IPv4", 00:15:49.473 "traddr": "10.0.0.3", 00:15:49.473 "trsvcid": "4420", 00:15:49.473 "trtype": "TCP" 00:15:49.473 }, 00:15:49.473 "peer_address": { 00:15:49.473 "adrfam": "IPv4", 00:15:49.473 "traddr": "10.0.0.1", 00:15:49.473 "trsvcid": "39348", 00:15:49.473 "trtype": "TCP" 00:15:49.473 }, 00:15:49.473 "qid": 0, 00:15:49.473 "state": "enabled", 00:15:49.473 "thread": "nvmf_tgt_poll_group_000" 00:15:49.473 } 00:15:49.473 ]' 00:15:49.474 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.731 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.990 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:49.990 13:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.556 13:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.556 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.813 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:50.813 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.814 13:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.395 00:15:51.395 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.395 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.395 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.652 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.652 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.652 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.652 13:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.653 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.653 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.653 { 00:15:51.653 "auth": { 00:15:51.653 "dhgroup": "ffdhe8192", 00:15:51.653 "digest": "sha256", 00:15:51.653 "state": "completed" 00:15:51.653 }, 00:15:51.653 "cntlid": 43, 00:15:51.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:51.653 "listen_address": { 00:15:51.653 "adrfam": "IPv4", 00:15:51.653 "traddr": "10.0.0.3", 00:15:51.653 "trsvcid": "4420", 00:15:51.653 "trtype": "TCP" 00:15:51.653 }, 00:15:51.653 "peer_address": { 00:15:51.653 "adrfam": "IPv4", 00:15:51.653 "traddr": "10.0.0.1", 00:15:51.653 "trsvcid": "39376", 00:15:51.653 "trtype": "TCP" 00:15:51.653 }, 00:15:51.653 "qid": 0, 00:15:51.653 "state": "enabled", 00:15:51.653 "thread": "nvmf_tgt_poll_group_000" 00:15:51.653 } 00:15:51.653 ]' 00:15:51.653 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.910 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.910 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.910 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.911 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.911 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.911 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.911 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.168 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:52.168 13:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
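
Each host-side attach above is mirrored on the target: before the host connects, the subsystem is told which DH-HMAC-CHAP key (and optional controller key) that host will present, and the grant is revoked again after every verification pass. A minimal sketch using the rpc_cmd wrapper seen in the trace, with HOSTNQN abbreviating the uuid-based host NQN:

# Grant the host a key pair on the subsystem, run the pass, then revoke
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# ... attach, verify qpair auth state, detach ...
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
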
00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.733 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.991 13:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.555 00:15:53.555 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.555 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.555 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.813 13:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.813 { 00:15:53.813 "auth": { 00:15:53.813 "dhgroup": "ffdhe8192", 00:15:53.813 "digest": "sha256", 00:15:53.813 "state": "completed" 00:15:53.813 }, 00:15:53.813 "cntlid": 45, 00:15:53.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:53.813 "listen_address": { 00:15:53.813 "adrfam": "IPv4", 00:15:53.813 "traddr": "10.0.0.3", 00:15:53.813 "trsvcid": "4420", 00:15:53.813 "trtype": "TCP" 00:15:53.813 }, 00:15:53.813 "peer_address": { 00:15:53.813 "adrfam": "IPv4", 00:15:53.813 "traddr": "10.0.0.1", 00:15:53.813 "trsvcid": "39406", 00:15:53.813 "trtype": "TCP" 00:15:53.813 }, 00:15:53.813 "qid": 0, 00:15:53.813 "state": "enabled", 00:15:53.813 "thread": "nvmf_tgt_poll_group_000" 00:15:53.813 } 00:15:53.813 ]' 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.813 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.070 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.070 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.070 13:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.332 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:54.332 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
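
The checks that follow each attach are the core of the test: it pulls the subsystem's qpairs and asserts that the negotiated digest, DH group, and authentication state match what was configured. A sketch of that verification step, with the jq filters copied verbatim from the trace (the herestring plumbing is an assumption about the script's internals):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
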
00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.901 13:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.467 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.033 00:15:56.033 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.033 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.033 13:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.292 
13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.292 { 00:15:56.292 "auth": { 00:15:56.292 "dhgroup": "ffdhe8192", 00:15:56.292 "digest": "sha256", 00:15:56.292 "state": "completed" 00:15:56.292 }, 00:15:56.292 "cntlid": 47, 00:15:56.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:56.292 "listen_address": { 00:15:56.292 "adrfam": "IPv4", 00:15:56.292 "traddr": "10.0.0.3", 00:15:56.292 "trsvcid": "4420", 00:15:56.292 "trtype": "TCP" 00:15:56.292 }, 00:15:56.292 "peer_address": { 00:15:56.292 "adrfam": "IPv4", 00:15:56.292 "traddr": "10.0.0.1", 00:15:56.292 "trsvcid": "39438", 00:15:56.292 "trtype": "TCP" 00:15:56.292 }, 00:15:56.292 "qid": 0, 00:15:56.292 "state": "enabled", 00:15:56.292 "thread": "nvmf_tgt_poll_group_000" 00:15:56.292 } 00:15:56.292 ]' 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.292 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.550 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:56.550 13:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:15:57.116 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
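
Besides the SPDK bdev path, every pass also exercises the kernel initiator through nvme-cli, passing the same DHHC-1 secret blobs on the command line (the secrets shown in this log are throwaway test keys, not real credentials). A sketch of that leg with the long secrets elided; --dhchap-ctrl-secret appears only for key IDs that have a controller key configured:

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 \
    --dhchap-secret 'DHHC-1:03:...'   # plus --dhchap-ctrl-secret '...' when set
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
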
00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.375 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.634 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.894 00:15:57.894 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.894 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.894 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.151 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.151 { 00:15:58.151 "auth": { 00:15:58.151 "dhgroup": "null", 00:15:58.151 "digest": "sha384", 00:15:58.151 "state": "completed" 00:15:58.151 }, 00:15:58.151 "cntlid": 49, 00:15:58.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:15:58.151 "listen_address": { 00:15:58.151 "adrfam": "IPv4", 00:15:58.151 "traddr": "10.0.0.3", 00:15:58.151 "trsvcid": "4420", 00:15:58.151 "trtype": "TCP" 00:15:58.151 }, 00:15:58.152 "peer_address": { 00:15:58.152 "adrfam": "IPv4", 00:15:58.152 "traddr": "10.0.0.1", 00:15:58.152 "trsvcid": "47260", 00:15:58.152 "trtype": "TCP" 00:15:58.152 }, 00:15:58.152 "qid": 0, 00:15:58.152 "state": "enabled", 00:15:58.152 "thread": "nvmf_tgt_poll_group_000" 00:15:58.152 } 00:15:58.152 ]' 00:15:58.152 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.152 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.152 13:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.152 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.152 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.152 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.152 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.152 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.717 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:58.717 13:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.284 13:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.284 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.542 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.800 00:15:59.800 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.800 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.800 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.057 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.058 { 00:16:00.058 "auth": { 00:16:00.058 "dhgroup": "null", 00:16:00.058 "digest": "sha384", 00:16:00.058 "state": "completed" 00:16:00.058 }, 00:16:00.058 "cntlid": 51, 00:16:00.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:00.058 "listen_address": { 00:16:00.058 "adrfam": "IPv4", 00:16:00.058 "traddr": "10.0.0.3", 00:16:00.058 "trsvcid": "4420", 00:16:00.058 "trtype": "TCP" 00:16:00.058 }, 00:16:00.058 "peer_address": { 00:16:00.058 "adrfam": "IPv4", 00:16:00.058 "traddr": "10.0.0.1", 00:16:00.058 "trsvcid": "47294", 00:16:00.058 "trtype": "TCP" 00:16:00.058 }, 00:16:00.058 "qid": 0, 00:16:00.058 "state": "enabled", 00:16:00.058 "thread": "nvmf_tgt_poll_group_000" 00:16:00.058 } 00:16:00.058 ]' 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.058 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.316 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.316 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.316 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.316 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.573 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:00.573 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.144 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.144 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.402 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.661 00:16:01.661 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.661 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.661 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.919 { 00:16:01.919 "auth": { 00:16:01.919 "dhgroup": "null", 00:16:01.919 "digest": "sha384", 00:16:01.919 "state": "completed" 00:16:01.919 }, 00:16:01.919 "cntlid": 53, 00:16:01.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:01.919 "listen_address": { 00:16:01.919 "adrfam": "IPv4", 00:16:01.919 "traddr": "10.0.0.3", 00:16:01.919 "trsvcid": "4420", 00:16:01.919 "trtype": "TCP" 00:16:01.919 }, 00:16:01.919 "peer_address": { 00:16:01.919 "adrfam": "IPv4", 00:16:01.919 "traddr": "10.0.0.1", 00:16:01.919 "trsvcid": "47320", 00:16:01.919 "trtype": "TCP" 00:16:01.919 }, 00:16:01.919 "qid": 0, 00:16:01.919 "state": "enabled", 00:16:01.919 "thread": "nvmf_tgt_poll_group_000" 00:16:01.919 } 00:16:01.919 ]' 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.919 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.178 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:02.178 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.178 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.178 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.178 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.435 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:02.436 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.005 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.263 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.522 00:16:03.522 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.522 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
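Each round in this section repeats one pattern from target/auth.sh: configure the host side's DH-HMAC-CHAP digest and dhgroup, allow the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket, and confirm the negotiated auth state. A minimal sketch of the round just completed, using only commands that appear verbatim in this log ($hostnqn stands in for the full nqn.2014-08.org.nvmexpress:uuid:f9261117-... string, and rpc.py for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # One sha384/null round for key3, condensed from the log above.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3                      # key3 carries no ctrlr key in this run
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3             # authentication happens on attach
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'              # the test expects "completed"

The detach/remove_host pairs logged between rounds are the matching teardown (bdev_nvme_detach_controller nvme0 on the host socket, then nvmf_subsystem_remove_host on the target).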
00:16:03.522 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.782 { 00:16:03.782 "auth": { 00:16:03.782 "dhgroup": "null", 00:16:03.782 "digest": "sha384", 00:16:03.782 "state": "completed" 00:16:03.782 }, 00:16:03.782 "cntlid": 55, 00:16:03.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:03.782 "listen_address": { 00:16:03.782 "adrfam": "IPv4", 00:16:03.782 "traddr": "10.0.0.3", 00:16:03.782 "trsvcid": "4420", 00:16:03.782 "trtype": "TCP" 00:16:03.782 }, 00:16:03.782 "peer_address": { 00:16:03.782 "adrfam": "IPv4", 00:16:03.782 "traddr": "10.0.0.1", 00:16:03.782 "trsvcid": "47354", 00:16:03.782 "trtype": "TCP" 00:16:03.782 }, 00:16:03.782 "qid": 0, 00:16:03.782 "state": "enabled", 00:16:03.782 "thread": "nvmf_tgt_poll_group_000" 00:16:03.782 } 00:16:03.782 ]' 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.782 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.041 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.041 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.042 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.300 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:04.300 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
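After the RPC-level check, the same key is exercised through the kernel initiator, producing the nvme connect/disconnect pair just logged. A condensed sketch (the DHHC-1 secret is elided here; the full value appears verbatim above, and $hostnqn/$hostid stand in for the uuid:f9261117-... identifiers):

    # Kernel-host (nvme-cli) leg of the round; --dhchap-ctrl-secret is passed
    # only for keys that also carry a controller key (key3 does not).
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:03:NzBj...ZH3objQ=:'   # elided; full secret in log
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # "disconnected 1 controller(s)"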
00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.867 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.125 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.126 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.126 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.126 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.126 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.126 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.385 00:16:05.385 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.385 
13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.385 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.645 { 00:16:05.645 "auth": { 00:16:05.645 "dhgroup": "ffdhe2048", 00:16:05.645 "digest": "sha384", 00:16:05.645 "state": "completed" 00:16:05.645 }, 00:16:05.645 "cntlid": 57, 00:16:05.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:05.645 "listen_address": { 00:16:05.645 "adrfam": "IPv4", 00:16:05.645 "traddr": "10.0.0.3", 00:16:05.645 "trsvcid": "4420", 00:16:05.645 "trtype": "TCP" 00:16:05.645 }, 00:16:05.645 "peer_address": { 00:16:05.645 "adrfam": "IPv4", 00:16:05.645 "traddr": "10.0.0.1", 00:16:05.645 "trsvcid": "47386", 00:16:05.645 "trtype": "TCP" 00:16:05.645 }, 00:16:05.645 "qid": 0, 00:16:05.645 "state": "enabled", 00:16:05.645 "thread": "nvmf_tgt_poll_group_000" 00:16:05.645 } 00:16:05.645 ]' 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.645 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.904 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:05.904 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: 
--dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.474 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.733 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.991 00:16:07.284 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.284 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.284 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.284 { 00:16:07.284 "auth": { 00:16:07.284 "dhgroup": "ffdhe2048", 00:16:07.284 "digest": "sha384", 00:16:07.284 "state": "completed" 00:16:07.284 }, 00:16:07.284 "cntlid": 59, 00:16:07.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:07.284 "listen_address": { 00:16:07.284 "adrfam": "IPv4", 00:16:07.284 "traddr": "10.0.0.3", 00:16:07.284 "trsvcid": "4420", 00:16:07.284 "trtype": "TCP" 00:16:07.284 }, 00:16:07.284 "peer_address": { 00:16:07.284 "adrfam": "IPv4", 00:16:07.284 "traddr": "10.0.0.1", 00:16:07.284 "trsvcid": "46076", 00:16:07.284 "trtype": "TCP" 00:16:07.284 }, 00:16:07.284 "qid": 0, 00:16:07.284 "state": "enabled", 00:16:07.284 "thread": "nvmf_tgt_poll_group_000" 00:16:07.284 } 00:16:07.284 ]' 00:16:07.284 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.567 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.826 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:07.826 13:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.394 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.654 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.912 00:16:08.912 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.912 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.912 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.170 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.170 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.170 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.170 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.171 { 00:16:09.171 "auth": { 00:16:09.171 "dhgroup": "ffdhe2048", 00:16:09.171 "digest": "sha384", 00:16:09.171 "state": "completed" 00:16:09.171 }, 00:16:09.171 "cntlid": 61, 00:16:09.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:09.171 "listen_address": { 00:16:09.171 "adrfam": "IPv4", 00:16:09.171 "traddr": "10.0.0.3", 00:16:09.171 "trsvcid": "4420", 00:16:09.171 "trtype": "TCP" 00:16:09.171 }, 00:16:09.171 "peer_address": { 00:16:09.171 "adrfam": "IPv4", 00:16:09.171 "traddr": "10.0.0.1", 00:16:09.171 "trsvcid": "46100", 00:16:09.171 "trtype": "TCP" 00:16:09.171 }, 00:16:09.171 "qid": 0, 00:16:09.171 "state": "enabled", 00:16:09.171 "thread": "nvmf_tgt_poll_group_000" 00:16:09.171 } 00:16:09.171 ]' 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.171 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.430 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.430 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.430 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.430 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:09.430 13:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:09.998 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.257 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.516 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.774 00:16:10.774 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.774 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.774 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.034 { 00:16:11.034 "auth": { 00:16:11.034 "dhgroup": "ffdhe2048", 00:16:11.034 "digest": "sha384", 00:16:11.034 "state": "completed" 00:16:11.034 }, 00:16:11.034 "cntlid": 63, 00:16:11.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:11.034 "listen_address": { 00:16:11.034 "adrfam": "IPv4", 00:16:11.034 "traddr": "10.0.0.3", 00:16:11.034 "trsvcid": "4420", 00:16:11.034 "trtype": "TCP" 00:16:11.034 }, 00:16:11.034 "peer_address": { 00:16:11.034 "adrfam": "IPv4", 00:16:11.034 "traddr": "10.0.0.1", 00:16:11.034 "trsvcid": "46128", 00:16:11.034 "trtype": "TCP" 00:16:11.034 }, 00:16:11.034 "qid": 0, 00:16:11.034 "state": "enabled", 00:16:11.034 "thread": "nvmf_tgt_poll_group_000" 00:16:11.034 } 00:16:11.034 ]' 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.034 13:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.297 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:11.297 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.864 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:12.124 13:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.383 00:16:12.383 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.383 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.383 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.642 { 00:16:12.642 "auth": { 00:16:12.642 "dhgroup": "ffdhe3072", 00:16:12.642 "digest": "sha384", 00:16:12.642 "state": "completed" 00:16:12.642 }, 00:16:12.642 "cntlid": 65, 00:16:12.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:12.642 "listen_address": { 00:16:12.642 "adrfam": "IPv4", 00:16:12.642 "traddr": "10.0.0.3", 00:16:12.642 "trsvcid": "4420", 00:16:12.642 "trtype": "TCP" 00:16:12.642 }, 00:16:12.642 "peer_address": { 00:16:12.642 "adrfam": "IPv4", 00:16:12.642 "traddr": "10.0.0.1", 00:16:12.642 "trsvcid": "46154", 00:16:12.642 "trtype": "TCP" 00:16:12.642 }, 00:16:12.642 "qid": 0, 00:16:12.642 "state": "enabled", 00:16:12.642 "thread": "nvmf_tgt_poll_group_000" 00:16:12.642 } 00:16:12.642 ]' 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.642 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.901 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.901 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.901 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.901 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.901 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.160 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:13.160 13:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:13.728 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.987 13:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.987 13:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.246 00:16:14.246 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.246 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.246 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.504 { 00:16:14.504 "auth": { 00:16:14.504 "dhgroup": "ffdhe3072", 00:16:14.504 "digest": "sha384", 00:16:14.504 "state": "completed" 00:16:14.504 }, 00:16:14.504 "cntlid": 67, 00:16:14.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:14.504 "listen_address": { 00:16:14.504 "adrfam": "IPv4", 00:16:14.504 "traddr": "10.0.0.3", 00:16:14.504 "trsvcid": "4420", 00:16:14.504 "trtype": "TCP" 00:16:14.504 }, 00:16:14.504 "peer_address": { 00:16:14.504 "adrfam": "IPv4", 00:16:14.504 "traddr": "10.0.0.1", 00:16:14.504 "trsvcid": "46184", 00:16:14.504 "trtype": "TCP" 00:16:14.504 }, 00:16:14.504 "qid": 0, 00:16:14.504 "state": "enabled", 00:16:14.504 "thread": "nvmf_tgt_poll_group_000" 00:16:14.504 } 00:16:14.504 ]' 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.504 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.763 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.763 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.763 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.763 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.763 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
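The add_host, attach, verify, detach cycle that repeats for every key below is connect_authenticate() (the target/auth.sh@65-78 expansions above). Reconstructed from the trace rather than copied from the script, one round looks roughly like this; the array names and the $hostnqn shorthand for the long uuid-based NQN are assumptions:

    # assumptions: hostnqn holds nqn.2014-08.org.nvmexpress:uuid:f9261117-...,
    # keys[]/ckeys[] hold the DHHC-1:...: secret strings registered earlier
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        # target side: allow this host on the subsystem with the given key pair
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # SPDK initiator: attach, then prove the qpair finished DH-HMAC-CHAP
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
        # kernel initiator: same secrets through nvme-cli, then tear down
        nvme_connect --dhchap-secret "${keys[$keyid]}" \
            ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    }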
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.022 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:15.022 13:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.590 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.849 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
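After the SPDK-initiator pass, nvme_connect (target/auth.sh@36) repeats the handshake with the Linux kernel initiator. Decoding the short flags against nvme-cli's connect options (worth re-checking against the installed nvme-cli version):

    # -i is --nr-io-queues: one I/O queue is all the auth test needs
    # -l is --ctrl-loss-tmo: 0 makes a failed handshake surface immediately
    # trsvcid is omitted; tcp defaults to 4420, matching the listener above
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 \
        -q "$hostnqn" --hostid "$hostid" -i 1 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"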
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.850 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.109 00:16:16.109 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.109 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.109 13:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.368 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.368 { 00:16:16.369 "auth": { 00:16:16.369 "dhgroup": "ffdhe3072", 00:16:16.369 "digest": "sha384", 00:16:16.369 "state": "completed" 00:16:16.369 }, 00:16:16.369 "cntlid": 69, 00:16:16.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:16.369 "listen_address": { 00:16:16.369 "adrfam": "IPv4", 00:16:16.369 "traddr": "10.0.0.3", 00:16:16.369 "trsvcid": "4420", 00:16:16.369 "trtype": "TCP" 00:16:16.369 }, 00:16:16.369 "peer_address": { 00:16:16.369 "adrfam": "IPv4", 00:16:16.369 "traddr": "10.0.0.1", 00:16:16.369 "trsvcid": "46216", 00:16:16.369 "trtype": "TCP" 00:16:16.369 }, 00:16:16.369 "qid": 0, 00:16:16.369 "state": "enabled", 00:16:16.369 "thread": "nvmf_tgt_poll_group_000" 00:16:16.369 } 00:16:16.369 ]' 00:16:16.369 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.369 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.369 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.369 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.369 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.628 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.628 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
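The escaped patterns such as [[ sha384 == \s\h\a\3\8\4 ]] are not corruption: xtrace prints the quoted right-hand side of [[ == ]] with every character escaped, so the comparison is literal rather than a glob match. Note also how cntlid climbs by two per round (65, 67, 69, ...), apparently because the kernel-initiator connect in between consumes the intervening controller ID. The three per-qpair assertions, restated plainly:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]   # sha384 in this pass
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # ffdhe3072 at this point
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished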
bdev_nvme_detach_controller nvme0 00:16:16.628 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.887 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:16.887 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:17.454 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:17.455 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.714 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.973 00:16:17.973 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.973 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.973 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.232 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.232 { 00:16:18.232 "auth": { 00:16:18.232 "dhgroup": "ffdhe3072", 00:16:18.232 "digest": "sha384", 00:16:18.232 "state": "completed" 00:16:18.232 }, 00:16:18.232 "cntlid": 71, 00:16:18.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:18.232 "listen_address": { 00:16:18.232 "adrfam": "IPv4", 00:16:18.232 "traddr": "10.0.0.3", 00:16:18.232 "trsvcid": "4420", 00:16:18.232 "trtype": "TCP" 00:16:18.232 }, 00:16:18.232 "peer_address": { 00:16:18.232 "adrfam": "IPv4", 00:16:18.232 "traddr": "10.0.0.1", 00:16:18.232 "trsvcid": "47442", 00:16:18.232 "trtype": "TCP" 00:16:18.232 }, 00:16:18.232 "qid": 0, 00:16:18.232 "state": "enabled", 00:16:18.232 "thread": "nvmf_tgt_poll_group_000" 00:16:18.232 } 00:16:18.232 ]' 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
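The key3 round just above registers only --dhchap-key key3: the ckey expansion at target/auth.sh@68 is empty for this index, so the controller-key arguments vanish and the round exercises unidirectional authentication (the host proves itself, the controller does not). The bash mechanism, with a hypothetical array standing in for the script's ckeys:

    # ${var:+word} expands to word only when var is set and non-empty;
    # $3 inside connect_authenticate is the keyid argument
    ckeys=(ck0 ck1 ck2 "")            # hypothetical values; index 3 left empty
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"                # 0 -> no --dhchap-ctrlr-key pair is passed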
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.232 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.492 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:18.492 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:19.060 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.060 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:19.060 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.060 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.320 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.320 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.320 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.320 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.320 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.320 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.579 13:46:00 
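Here the outer loop advances from ffdhe3072 to ffdhe4096 (target/auth.sh@119-123): before every reconnect the host is pinned to exactly one digest/dhgroup combination via bdev_nvme_set_options, so a successful attach proves that specific pairing negotiated end to end. The sweep, paraphrased with assumed array names:

    for dhgroup in "${dhgroups[@]}"; do    # ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do     # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done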
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.579 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.579 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.579 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.838 00:16:19.838 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.838 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.838 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.097 { 00:16:20.097 "auth": { 00:16:20.097 "dhgroup": "ffdhe4096", 00:16:20.097 "digest": "sha384", 00:16:20.097 "state": "completed" 00:16:20.097 }, 00:16:20.097 "cntlid": 73, 00:16:20.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:20.097 "listen_address": { 00:16:20.097 "adrfam": "IPv4", 00:16:20.097 "traddr": "10.0.0.3", 00:16:20.097 "trsvcid": "4420", 00:16:20.097 "trtype": "TCP" 00:16:20.097 }, 00:16:20.097 "peer_address": { 00:16:20.097 "adrfam": "IPv4", 00:16:20.097 "traddr": "10.0.0.1", 00:16:20.097 "trsvcid": "47464", 00:16:20.097 "trtype": "TCP" 00:16:20.097 }, 00:16:20.097 "qid": 0, 00:16:20.097 "state": "enabled", 00:16:20.097 "thread": "nvmf_tgt_poll_group_000" 00:16:20.097 } 00:16:20.097 ]' 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.097 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.357 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.357 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.357 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.616 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:20.616 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.184 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.442 13:46:02 
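The secrets printed in the clear here are throwaway test keys, so leaking them into the log is intentional. Their wire format is DHHC-1:<t>:<base64>:, where t is the secret transform (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret plus an integrity CRC32; key0 above is an untransformed host secret paired with a SHA-512-class controller secret. nvme-cli can mint one (flag spellings per recent nvme-cli; verify locally):

    # 48 random bytes with the SHA-384 transform -> prints a DHHC-1:02:...: string
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn "$hostnqn"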
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.442 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.702 00:16:21.702 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.702 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.702 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.961 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.961 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.961 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.961 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.222 { 00:16:22.222 "auth": { 00:16:22.222 "dhgroup": "ffdhe4096", 00:16:22.222 "digest": "sha384", 00:16:22.222 "state": "completed" 00:16:22.222 }, 00:16:22.222 "cntlid": 75, 00:16:22.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:22.222 "listen_address": { 00:16:22.222 "adrfam": "IPv4", 00:16:22.222 "traddr": "10.0.0.3", 00:16:22.222 "trsvcid": "4420", 00:16:22.222 "trtype": "TCP" 00:16:22.222 }, 00:16:22.222 "peer_address": { 00:16:22.222 "adrfam": "IPv4", 00:16:22.222 "traddr": "10.0.0.1", 00:16:22.222 "trsvcid": "47498", 00:16:22.222 "trtype": "TCP" 00:16:22.222 }, 00:16:22.222 "qid": 0, 00:16:22.222 "state": "enabled", 00:16:22.222 "thread": "nvmf_tgt_poll_group_000" 00:16:22.222 } 00:16:22.222 ]' 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:16:22.222 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.222 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.222 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.222 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.482 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:22.482 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:23.083 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.343 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.602 00:16:23.602 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.602 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.602 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.861 { 00:16:23.861 "auth": { 00:16:23.861 "dhgroup": "ffdhe4096", 00:16:23.861 "digest": "sha384", 00:16:23.861 "state": "completed" 00:16:23.861 }, 00:16:23.861 "cntlid": 77, 00:16:23.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:23.861 "listen_address": { 00:16:23.861 "adrfam": "IPv4", 00:16:23.861 "traddr": "10.0.0.3", 00:16:23.861 "trsvcid": "4420", 00:16:23.861 "trtype": "TCP" 00:16:23.861 }, 00:16:23.861 "peer_address": { 00:16:23.861 "adrfam": "IPv4", 00:16:23.861 "traddr": "10.0.0.1", 00:16:23.861 "trsvcid": "47516", 00:16:23.861 "trtype": "TCP" 00:16:23.861 }, 00:16:23.861 "qid": 0, 00:16:23.861 "state": "enabled", 00:16:23.861 "thread": "nvmf_tgt_poll_group_000" 00:16:23.861 } 00:16:23.861 ]' 00:16:23.861 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
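The key names handed to nvmf_subsystem_add_host and bdev_nvme_attach_controller (key0..key3, ckey0..ckey2) are not the secrets themselves but names of keyring entries registered earlier in the run, outside this excerpt. With SPDK's file-based keyring that registration would look roughly like the following; the file paths are hypothetical:

    # each file holds one DHHC-1:...: string; the name is what the RPCs refer to
    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key0
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.ckey0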
# jq -r '.[0].auth.dhgroup' 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.120 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.379 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:24.379 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:24.948 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.208 13:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.208 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.468 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.727 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.986 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.986 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.986 { 00:16:25.986 "auth": { 00:16:25.986 "dhgroup": "ffdhe4096", 00:16:25.986 "digest": "sha384", 00:16:25.986 "state": "completed" 00:16:25.986 }, 00:16:25.986 "cntlid": 79, 00:16:25.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:25.987 "listen_address": { 00:16:25.987 "adrfam": "IPv4", 00:16:25.987 "traddr": "10.0.0.3", 00:16:25.987 "trsvcid": "4420", 00:16:25.987 "trtype": "TCP" 00:16:25.987 }, 00:16:25.987 "peer_address": { 00:16:25.987 "adrfam": "IPv4", 00:16:25.987 "traddr": "10.0.0.1", 00:16:25.987 "trsvcid": "47534", 00:16:25.987 "trtype": "TCP" 00:16:25.987 }, 00:16:25.987 "qid": 0, 00:16:25.987 "state": "enabled", 00:16:25.987 "thread": "nvmf_tgt_poll_group_000" 00:16:25.987 } 00:16:25.987 ]' 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.987 13:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.987 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.246 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:26.246 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:26.814 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.814 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:26.814 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:26.815 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
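Third dhgroup iteration: ffdhe6144. DH-HMAC-CHAP draws its groups from the RFC 7919 FFDHE set (2048 through 8192 bits), and the per-handshake cost tends to grow with the modulus, which is why sweeps like this slow down in the later groups. As before, only one group is enabled at a time:

    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144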
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.074 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.643 00:16:27.643 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.643 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.643 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.901 { 00:16:27.901 "auth": { 00:16:27.901 "dhgroup": "ffdhe6144", 00:16:27.901 "digest": "sha384", 00:16:27.901 "state": "completed" 00:16:27.901 }, 00:16:27.901 "cntlid": 81, 00:16:27.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:27.901 "listen_address": { 00:16:27.901 "adrfam": "IPv4", 00:16:27.901 "traddr": "10.0.0.3", 00:16:27.901 "trsvcid": "4420", 00:16:27.901 "trtype": "TCP" 00:16:27.901 }, 00:16:27.901 "peer_address": { 00:16:27.901 "adrfam": "IPv4", 00:16:27.901 "traddr": "10.0.0.1", 00:16:27.901 "trsvcid": "38300", 00:16:27.901 "trtype": "TCP" 00:16:27.901 }, 00:16:27.901 "qid": 0, 00:16:27.901 "state": "enabled", 00:16:27.901 "thread": "nvmf_tgt_poll_group_000" 00:16:27.901 } 00:16:27.901 ]' 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.901 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.160 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:28.160 13:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:28.730 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
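The recurring @561/@589 pairs around every rpc_cmd are autotest_common.sh plumbing: tracing is muted inside the helper (xtrace_disable's "set +x" at @10) and the [[ 0 == 0 ]] line is the exit-status assertion printed once tracing resumes. A simplified paraphrase; the real helper is more elaborate (it can talk to a persistent rpc.py server, for instance):

    rpc_cmd() {
        xtrace_disable                                   # autotest_common.sh@561
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                                   # surfaces as [[ 0 == 0 ]] at @589
    }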
dhgroup=ffdhe6144 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.988 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.989 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.989 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.989 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.557 00:16:29.557 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.557 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.557 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.816 { 00:16:29.816 "auth": { 00:16:29.816 "dhgroup": "ffdhe6144", 00:16:29.816 "digest": "sha384", 00:16:29.816 "state": "completed" 00:16:29.816 }, 00:16:29.816 "cntlid": 83, 00:16:29.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:29.816 "listen_address": { 00:16:29.816 "adrfam": "IPv4", 00:16:29.816 "traddr": "10.0.0.3", 00:16:29.816 "trsvcid": "4420", 00:16:29.816 "trtype": "TCP" 00:16:29.816 }, 00:16:29.816 "peer_address": { 00:16:29.816 "adrfam": "IPv4", 00:16:29.816 "traddr": "10.0.0.1", 00:16:29.816 "trsvcid": "38320", 00:16:29.816 "trtype": "TCP" 00:16:29.816 }, 00:16:29.816 "qid": 0, 00:16:29.816 "state": 
"enabled", 00:16:29.816 "thread": "nvmf_tgt_poll_group_000" 00:16:29.816 } 00:16:29.816 ]' 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.816 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.076 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.076 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.076 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.335 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:30.335 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:30.904 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:31.163 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:31.163 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.163 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.164 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.744 00:16:31.744 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.744 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.744 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.003 { 00:16:32.003 "auth": { 00:16:32.003 "dhgroup": "ffdhe6144", 00:16:32.003 "digest": "sha384", 00:16:32.003 "state": "completed" 00:16:32.003 }, 00:16:32.003 "cntlid": 85, 00:16:32.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:32.003 "listen_address": { 00:16:32.003 "adrfam": "IPv4", 00:16:32.003 "traddr": "10.0.0.3", 00:16:32.003 "trsvcid": "4420", 00:16:32.003 "trtype": "TCP" 00:16:32.003 }, 00:16:32.003 "peer_address": { 00:16:32.003 "adrfam": "IPv4", 00:16:32.003 "traddr": "10.0.0.1", 00:16:32.003 
"trsvcid": "38350", 00:16:32.003 "trtype": "TCP" 00:16:32.003 }, 00:16:32.003 "qid": 0, 00:16:32.003 "state": "enabled", 00:16:32.003 "thread": "nvmf_tgt_poll_group_000" 00:16:32.003 } 00:16:32.003 ]' 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.003 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.262 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:32.262 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:32.831 13:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.399 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.656 00:16:33.656 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.656 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.656 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.914 { 00:16:33.914 "auth": { 00:16:33.914 "dhgroup": "ffdhe6144", 00:16:33.914 "digest": "sha384", 00:16:33.914 "state": "completed" 00:16:33.914 }, 00:16:33.914 "cntlid": 87, 00:16:33.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:33.914 "listen_address": { 00:16:33.914 "adrfam": "IPv4", 00:16:33.914 "traddr": "10.0.0.3", 00:16:33.914 "trsvcid": "4420", 00:16:33.914 "trtype": "TCP" 00:16:33.914 }, 00:16:33.914 "peer_address": { 00:16:33.914 "adrfam": "IPv4", 00:16:33.914 "traddr": "10.0.0.1", 
00:16:33.914 "trsvcid": "38370", 00:16:33.914 "trtype": "TCP" 00:16:33.914 }, 00:16:33.914 "qid": 0, 00:16:33.914 "state": "enabled", 00:16:33.914 "thread": "nvmf_tgt_poll_group_000" 00:16:33.914 } 00:16:33.914 ]' 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.914 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.173 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.173 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.173 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.173 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.173 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.433 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:34.433 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.002 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.262 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.831 00:16:35.831 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.831 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.831 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.090 { 00:16:36.090 "auth": { 00:16:36.090 "dhgroup": "ffdhe8192", 00:16:36.090 "digest": "sha384", 00:16:36.090 "state": "completed" 00:16:36.090 }, 00:16:36.090 "cntlid": 89, 00:16:36.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:36.090 "listen_address": { 00:16:36.090 "adrfam": "IPv4", 00:16:36.090 "traddr": "10.0.0.3", 00:16:36.090 "trsvcid": "4420", 00:16:36.090 "trtype": "TCP" 
00:16:36.090 }, 00:16:36.090 "peer_address": { 00:16:36.090 "adrfam": "IPv4", 00:16:36.090 "traddr": "10.0.0.1", 00:16:36.090 "trsvcid": "38386", 00:16:36.090 "trtype": "TCP" 00:16:36.090 }, 00:16:36.090 "qid": 0, 00:16:36.090 "state": "enabled", 00:16:36.090 "thread": "nvmf_tgt_poll_group_000" 00:16:36.090 } 00:16:36.090 ]' 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.090 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.350 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.350 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.350 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.350 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.350 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.625 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:36.625 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:37.204 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:37.462 13:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.462 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.030 00:16:38.030 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.030 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.030 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.288 { 00:16:38.288 "auth": { 00:16:38.288 "dhgroup": "ffdhe8192", 00:16:38.288 "digest": "sha384", 00:16:38.288 "state": "completed" 00:16:38.288 }, 00:16:38.288 "cntlid": 91, 00:16:38.288 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:38.288 "listen_address": { 00:16:38.288 "adrfam": "IPv4", 00:16:38.288 "traddr": "10.0.0.3", 00:16:38.288 "trsvcid": "4420", 00:16:38.288 "trtype": "TCP" 00:16:38.288 }, 00:16:38.288 "peer_address": { 00:16:38.288 "adrfam": "IPv4", 00:16:38.288 "traddr": "10.0.0.1", 00:16:38.288 "trsvcid": "51704", 00:16:38.288 "trtype": "TCP" 00:16:38.288 }, 00:16:38.288 "qid": 0, 00:16:38.288 "state": "enabled", 00:16:38.288 "thread": "nvmf_tgt_poll_group_000" 00:16:38.288 } 00:16:38.288 ]' 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.288 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.546 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.546 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.546 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.804 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:38.804 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.369 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.628 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.196 00:16:40.196 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.196 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.196 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.455 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.456 { 00:16:40.456 "auth": { 00:16:40.456 "dhgroup": "ffdhe8192", 
00:16:40.456 "digest": "sha384", 00:16:40.456 "state": "completed" 00:16:40.456 }, 00:16:40.456 "cntlid": 93, 00:16:40.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:40.456 "listen_address": { 00:16:40.456 "adrfam": "IPv4", 00:16:40.456 "traddr": "10.0.0.3", 00:16:40.456 "trsvcid": "4420", 00:16:40.456 "trtype": "TCP" 00:16:40.456 }, 00:16:40.456 "peer_address": { 00:16:40.456 "adrfam": "IPv4", 00:16:40.456 "traddr": "10.0.0.1", 00:16:40.456 "trsvcid": "51738", 00:16:40.456 "trtype": "TCP" 00:16:40.456 }, 00:16:40.456 "qid": 0, 00:16:40.456 "state": "enabled", 00:16:40.456 "thread": "nvmf_tgt_poll_group_000" 00:16:40.456 } 00:16:40.456 ]' 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.456 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.715 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.715 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.715 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.715 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.974 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:40.974 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:41.554 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.828 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.395 00:16:42.395 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.395 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.395 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.654 { 00:16:42.654 "auth": { 00:16:42.654 "dhgroup": 
"ffdhe8192", 00:16:42.654 "digest": "sha384", 00:16:42.654 "state": "completed" 00:16:42.654 }, 00:16:42.654 "cntlid": 95, 00:16:42.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:42.654 "listen_address": { 00:16:42.654 "adrfam": "IPv4", 00:16:42.654 "traddr": "10.0.0.3", 00:16:42.654 "trsvcid": "4420", 00:16:42.654 "trtype": "TCP" 00:16:42.654 }, 00:16:42.654 "peer_address": { 00:16:42.654 "adrfam": "IPv4", 00:16:42.654 "traddr": "10.0.0.1", 00:16:42.654 "trsvcid": "51768", 00:16:42.654 "trtype": "TCP" 00:16:42.654 }, 00:16:42.654 "qid": 0, 00:16:42.654 "state": "enabled", 00:16:42.654 "thread": "nvmf_tgt_poll_group_000" 00:16:42.654 } 00:16:42.654 ]' 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.654 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.913 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.913 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.913 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.913 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.913 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.172 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:43.172 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.739 
13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.739 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.999 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.258 00:16:44.258 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.258 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.258 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.517 { 00:16:44.517 "auth": { 00:16:44.517 "dhgroup": "null", 00:16:44.517 "digest": "sha512", 00:16:44.517 "state": "completed" 00:16:44.517 }, 00:16:44.517 "cntlid": 97, 00:16:44.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:44.517 "listen_address": { 00:16:44.517 "adrfam": "IPv4", 00:16:44.517 "traddr": "10.0.0.3", 00:16:44.517 "trsvcid": "4420", 00:16:44.517 "trtype": "TCP" 00:16:44.517 }, 00:16:44.517 "peer_address": { 00:16:44.517 "adrfam": "IPv4", 00:16:44.517 "traddr": "10.0.0.1", 00:16:44.517 "trsvcid": "51782", 00:16:44.517 "trtype": "TCP" 00:16:44.517 }, 00:16:44.517 "qid": 0, 00:16:44.517 "state": "enabled", 00:16:44.517 "thread": "nvmf_tgt_poll_group_000" 00:16:44.517 } 00:16:44.517 ]' 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.517 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.776 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:44.776 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.345 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.604 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.879 00:16:45.879 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.879 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.879 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.148 13:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.148 { 00:16:46.148 "auth": { 00:16:46.148 "dhgroup": "null", 00:16:46.148 "digest": "sha512", 00:16:46.148 "state": "completed" 00:16:46.148 }, 00:16:46.148 "cntlid": 99, 00:16:46.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:46.148 "listen_address": { 00:16:46.148 "adrfam": "IPv4", 00:16:46.148 "traddr": "10.0.0.3", 00:16:46.148 "trsvcid": "4420", 00:16:46.148 "trtype": "TCP" 00:16:46.148 }, 00:16:46.148 "peer_address": { 00:16:46.148 "adrfam": "IPv4", 00:16:46.148 "traddr": "10.0.0.1", 00:16:46.148 "trsvcid": "51808", 00:16:46.148 "trtype": "TCP" 00:16:46.148 }, 00:16:46.148 "qid": 0, 00:16:46.148 "state": "enabled", 00:16:46.148 "thread": "nvmf_tgt_poll_group_000" 00:16:46.148 } 00:16:46.148 ]' 00:16:46.148 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.407 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.667 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:46.667 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:47.236 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.236 13:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.236 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.495 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.753 00:16:47.753 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.753 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.753 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.013 { 00:16:48.013 "auth": { 00:16:48.013 "dhgroup": "null", 00:16:48.013 "digest": "sha512", 00:16:48.013 "state": "completed" 00:16:48.013 }, 00:16:48.013 "cntlid": 101, 00:16:48.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:48.013 "listen_address": { 00:16:48.013 "adrfam": "IPv4", 00:16:48.013 "traddr": "10.0.0.3", 00:16:48.013 "trsvcid": "4420", 00:16:48.013 "trtype": "TCP" 00:16:48.013 }, 00:16:48.013 "peer_address": { 00:16:48.013 "adrfam": "IPv4", 00:16:48.013 "traddr": "10.0.0.1", 00:16:48.013 "trsvcid": "48954", 00:16:48.013 "trtype": "TCP" 00:16:48.013 }, 00:16:48.013 "qid": 0, 00:16:48.013 "state": "enabled", 00:16:48.013 "thread": "nvmf_tgt_poll_group_000" 00:16:48.013 } 00:16:48.013 ]' 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.013 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.581 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:48.581 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.149 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.150 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.150 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.409 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:49.668 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.929 { 00:16:49.929 "auth": { 00:16:49.929 "dhgroup": "null", 00:16:49.929 "digest": "sha512", 00:16:49.929 "state": "completed" 00:16:49.929 }, 00:16:49.929 "cntlid": 103, 00:16:49.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:49.929 "listen_address": { 00:16:49.929 "adrfam": "IPv4", 00:16:49.929 "traddr": "10.0.0.3", 00:16:49.929 "trsvcid": "4420", 00:16:49.929 "trtype": "TCP" 00:16:49.929 }, 00:16:49.929 "peer_address": { 00:16:49.929 "adrfam": "IPv4", 00:16:49.929 "traddr": "10.0.0.1", 00:16:49.929 "trsvcid": "48968", 00:16:49.929 "trtype": "TCP" 00:16:49.929 }, 00:16:49.929 "qid": 0, 00:16:49.929 "state": "enabled", 00:16:49.929 "thread": "nvmf_tgt_poll_group_000" 00:16:49.929 } 00:16:49.929 ]' 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.929 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.188 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:50.188 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.756 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.015 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.275 00:16:51.275 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.275 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.275 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.534 
13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.534 { 00:16:51.534 "auth": { 00:16:51.534 "dhgroup": "ffdhe2048", 00:16:51.534 "digest": "sha512", 00:16:51.534 "state": "completed" 00:16:51.534 }, 00:16:51.534 "cntlid": 105, 00:16:51.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:51.534 "listen_address": { 00:16:51.534 "adrfam": "IPv4", 00:16:51.534 "traddr": "10.0.0.3", 00:16:51.534 "trsvcid": "4420", 00:16:51.534 "trtype": "TCP" 00:16:51.534 }, 00:16:51.534 "peer_address": { 00:16:51.534 "adrfam": "IPv4", 00:16:51.534 "traddr": "10.0.0.1", 00:16:51.534 "trsvcid": "49008", 00:16:51.534 "trtype": "TCP" 00:16:51.534 }, 00:16:51.534 "qid": 0, 00:16:51.534 "state": "enabled", 00:16:51.534 "thread": "nvmf_tgt_poll_group_000" 00:16:51.534 } 00:16:51.534 ]' 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.534 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.793 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.793 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.793 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.793 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.793 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.052 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:52.052 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:52.620 13:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:52.620 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.880 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.139 00:16:53.139 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.139 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.139 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.398 { 00:16:53.398 "auth": { 00:16:53.398 "dhgroup": "ffdhe2048", 00:16:53.398 "digest": "sha512", 00:16:53.398 "state": "completed" 00:16:53.398 }, 00:16:53.398 "cntlid": 107, 00:16:53.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:53.398 "listen_address": { 00:16:53.398 "adrfam": "IPv4", 00:16:53.398 "traddr": "10.0.0.3", 00:16:53.398 "trsvcid": "4420", 00:16:53.398 "trtype": "TCP" 00:16:53.398 }, 00:16:53.398 "peer_address": { 00:16:53.398 "adrfam": "IPv4", 00:16:53.398 "traddr": "10.0.0.1", 00:16:53.398 "trsvcid": "49040", 00:16:53.398 "trtype": "TCP" 00:16:53.398 }, 00:16:53.398 "qid": 0, 00:16:53.398 "state": "enabled", 00:16:53.398 "thread": "nvmf_tgt_poll_group_000" 00:16:53.398 } 00:16:53.398 ]' 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.398 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.664 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:53.664 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.261 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.520 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.521 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.521 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.780 00:16:54.780 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.780 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.780 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.039 { 00:16:55.039 "auth": { 00:16:55.039 "dhgroup": "ffdhe2048", 00:16:55.039 "digest": "sha512", 00:16:55.039 "state": "completed" 00:16:55.039 }, 00:16:55.039 "cntlid": 109, 00:16:55.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:55.039 "listen_address": { 00:16:55.039 "adrfam": "IPv4", 00:16:55.039 "traddr": "10.0.0.3", 00:16:55.039 "trsvcid": "4420", 00:16:55.039 "trtype": "TCP" 00:16:55.039 }, 00:16:55.039 "peer_address": { 00:16:55.039 "adrfam": "IPv4", 00:16:55.039 "traddr": "10.0.0.1", 00:16:55.039 "trsvcid": "49078", 00:16:55.039 "trtype": "TCP" 00:16:55.039 }, 00:16:55.039 "qid": 0, 00:16:55.039 "state": "enabled", 00:16:55.039 "thread": "nvmf_tgt_poll_group_000" 00:16:55.039 } 00:16:55.039 ]' 00:16:55.039 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.298 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.557 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:55.557 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.126 13:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.126 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.384 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.642 00:16:56.642 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.642 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.642 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.902 { 00:16:56.902 "auth": { 00:16:56.902 "dhgroup": "ffdhe2048", 00:16:56.902 "digest": "sha512", 00:16:56.902 "state": "completed" 00:16:56.902 }, 00:16:56.902 "cntlid": 111, 00:16:56.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:56.902 "listen_address": { 00:16:56.902 "adrfam": "IPv4", 00:16:56.902 "traddr": "10.0.0.3", 00:16:56.902 "trsvcid": "4420", 00:16:56.902 "trtype": "TCP" 00:16:56.902 }, 00:16:56.902 "peer_address": { 00:16:56.902 "adrfam": "IPv4", 00:16:56.902 "traddr": "10.0.0.1", 00:16:56.902 "trsvcid": "49108", 00:16:56.902 "trtype": "TCP" 00:16:56.902 }, 00:16:56.902 "qid": 0, 00:16:56.902 "state": "enabled", 00:16:56.902 "thread": "nvmf_tgt_poll_group_000" 00:16:56.902 } 00:16:56.902 ]' 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.902 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.161 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.161 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.161 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.161 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.161 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.420 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:57.420 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:57.989 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.249 13:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.508 00:16:58.508 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.508 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
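The ffdhe3072 passes that follow repeat the same round-trip the trace has been showing since the null-dhgroup sweep. Stripped of the xtrace timestamps, one (digest, dhgroup, key) round of target/auth.sh reduces to roughly the sketch below. This is a reconstruction from the trace, not the verbatim script: hostrpc stands for scripts/rpc.py -s /var/tmp/host.sock (the host-side SPDK app), rpc_cmd drives the nvmf target app, the NQNs, 10.0.0.3, and 4420 are the values visible in the log, and "$key"/"$ckey" stand for the literal DHHC-1:xx:...: secret strings the log prints on the nvme connect lines.

# One round, reconstructed from the surrounding trace -- illustrative sketch only.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce

# Restrict the host to a single digest/dhgroup combination for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Register the host on the subsystem with its key (and controller key, when a ckey exists).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# SPDK-initiator handshake, then verify what the target actually negotiated.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe3072
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
hostrpc bdev_nvme_detach_controller nvme0

# Kernel-initiator handshake with the literal DHHC-1 secrets, then tear down.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Every digest x dhgroup x key combination in the surrounding log is this same round with the two bdev_nvme_set_options flags and the key index swapped; the key3 rounds omit --dhchap-ctrlr-key because no ckey3 is loaded.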
00:16:58.508 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.767 { 00:16:58.767 "auth": { 00:16:58.767 "dhgroup": "ffdhe3072", 00:16:58.767 "digest": "sha512", 00:16:58.767 "state": "completed" 00:16:58.767 }, 00:16:58.767 "cntlid": 113, 00:16:58.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:16:58.767 "listen_address": { 00:16:58.767 "adrfam": "IPv4", 00:16:58.767 "traddr": "10.0.0.3", 00:16:58.767 "trsvcid": "4420", 00:16:58.767 "trtype": "TCP" 00:16:58.767 }, 00:16:58.767 "peer_address": { 00:16:58.767 "adrfam": "IPv4", 00:16:58.767 "traddr": "10.0.0.1", 00:16:58.767 "trsvcid": "43568", 00:16:58.767 "trtype": "TCP" 00:16:58.767 }, 00:16:58.767 "qid": 0, 00:16:58.767 "state": "enabled", 00:16:58.767 "thread": "nvmf_tgt_poll_group_000" 00:16:58.767 } 00:16:58.767 ]' 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.767 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.025 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:59.025 13:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.593 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.852 13:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.111 00:17:00.370 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.370 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.370 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.628 { 00:17:00.628 "auth": { 00:17:00.628 "dhgroup": "ffdhe3072", 00:17:00.628 "digest": "sha512", 00:17:00.628 "state": "completed" 00:17:00.628 }, 00:17:00.628 "cntlid": 115, 00:17:00.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:00.628 "listen_address": { 00:17:00.628 "adrfam": "IPv4", 00:17:00.628 "traddr": "10.0.0.3", 00:17:00.628 "trsvcid": "4420", 00:17:00.628 "trtype": "TCP" 00:17:00.628 }, 00:17:00.628 "peer_address": { 00:17:00.628 "adrfam": "IPv4", 00:17:00.628 "traddr": "10.0.0.1", 00:17:00.628 "trsvcid": "43606", 00:17:00.628 "trtype": "TCP" 00:17:00.628 }, 00:17:00.628 "qid": 0, 00:17:00.628 "state": "enabled", 00:17:00.628 "thread": "nvmf_tgt_poll_group_000" 00:17:00.628 } 00:17:00.628 ]' 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.628 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.885 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:17:00.886 13:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid 
f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==:
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:01.453  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:01.453  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.713  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.972
00:17:01.972  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:01.972  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:01.972  13:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:02.231    {
00:17:02.231      "auth": {
00:17:02.231        "dhgroup": "ffdhe3072",
00:17:02.231        "digest": "sha512",
00:17:02.231        "state": "completed"
00:17:02.231      },
00:17:02.231      "cntlid": 117,
00:17:02.231      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:02.231      "listen_address": {
00:17:02.231        "adrfam": "IPv4",
00:17:02.231        "traddr": "10.0.0.3",
00:17:02.231        "trsvcid": "4420",
00:17:02.231        "trtype": "TCP"
00:17:02.231      },
00:17:02.231      "peer_address": {
00:17:02.231        "adrfam": "IPv4",
00:17:02.231        "traddr": "10.0.0.1",
00:17:02.231        "trsvcid": "43648",
00:17:02.231        "trtype": "TCP"
00:17:02.231      },
00:17:02.231      "qid": 0,
00:17:02.231      "state": "enabled",
00:17:02.231      "thread": "nvmf_tgt_poll_group_000"
00:17:02.231    }
00:17:02.231  ]'
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:02.231  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:02.232  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:02.528  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:02.528  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:02.528  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:02.528  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:02.528  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:02.786  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:02.786  13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:03.354  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:03.354  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:03.614  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:03.873
00:17:03.873  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:03.873  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:03.873  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:04.133    {
00:17:04.133      "auth": {
00:17:04.133        "dhgroup": "ffdhe3072",
00:17:04.133        "digest": "sha512",
00:17:04.133        "state": "completed"
00:17:04.133      },
00:17:04.133      "cntlid": 119,
00:17:04.133      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:04.133      "listen_address": {
00:17:04.133        "adrfam": "IPv4",
00:17:04.133        "traddr": "10.0.0.3",
00:17:04.133        "trsvcid": "4420",
00:17:04.133        "trtype": "TCP"
00:17:04.133      },
00:17:04.133      "peer_address": {
00:17:04.133        "adrfam": "IPv4",
00:17:04.133        "traddr": "10.0.0.1",
00:17:04.133        "trsvcid": "43680",
00:17:04.133        "trtype": "TCP"
00:17:04.133      },
00:17:04.133      "qid": 0,
00:17:04.133      "state": "enabled",
00:17:04.133      "thread": "nvmf_tgt_poll_group_000"
00:17:04.133    }
00:17:04.133  ]'
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:04.133  13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:04.133  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.133  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.133  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.391  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=:
00:17:04.391  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=:
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.959  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:04.959  13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:05.219  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:05.786
00:17:05.786  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:05.786  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:05.786  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:06.045    {
00:17:06.045      "auth": {
00:17:06.045        "dhgroup": "ffdhe4096",
00:17:06.045        "digest": "sha512",
00:17:06.045        "state": "completed"
00:17:06.045      },
00:17:06.045      "cntlid": 121,
00:17:06.045      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:06.045      "listen_address": {
00:17:06.045        "adrfam": "IPv4",
00:17:06.045        "traddr": "10.0.0.3",
00:17:06.045        "trsvcid": "4420",
00:17:06.045        "trtype": "TCP"
00:17:06.045      },
00:17:06.045      "peer_address": {
00:17:06.045        "adrfam": "IPv4",
00:17:06.045        "traddr": "10.0.0.1",
00:17:06.045        "trsvcid": "43704",
00:17:06.045        "trtype": "TCP"
00:17:06.045      },
00:17:06.045      "qid": 0,
00:17:06.045      "state": "enabled",
00:17:06.045      "thread": "nvmf_tgt_poll_group_000"
00:17:06.045    }
00:17:06.045  ]'
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.045  13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.304  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=:
00:17:06.304  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=:
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.876  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:06.876  13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.136  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.704
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.704  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.962  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.962  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:07.962    {
00:17:07.962      "auth": {
00:17:07.962        "dhgroup": "ffdhe4096",
00:17:07.962        "digest": "sha512",
00:17:07.962        "state": "completed"
00:17:07.962      },
00:17:07.962      "cntlid": 123,
00:17:07.962      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:07.962      "listen_address": {
00:17:07.962        "adrfam": "IPv4",
00:17:07.962        "traddr": "10.0.0.3",
00:17:07.962        "trsvcid": "4420",
00:17:07.962        "trtype": "TCP"
00:17:07.962      },
00:17:07.962      "peer_address": {
00:17:07.962        "adrfam": "IPv4",
00:17:07.962        "traddr": "10.0.0.1",
00:17:07.962        "trsvcid": "51174",
00:17:07.962        "trtype": "TCP"
00:17:07.962      },
00:17:07.962      "qid": 0,
00:17:07.962      "state": "enabled",
00:17:07.962      "thread": "nvmf_tgt_poll_group_000"
00:17:07.962    }
00:17:07.962  ]'
00:17:07.962  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:07.962  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.963  13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.221  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==:
00:17:08.221  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==:
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.789  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:08.789  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.049  13:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.615
00:17:09.616  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:09.616  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:09.616  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:09.875    {
00:17:09.875      "auth": {
00:17:09.875        "dhgroup": "ffdhe4096",
00:17:09.875        "digest": "sha512",
00:17:09.875        "state": "completed"
00:17:09.875      },
00:17:09.875      "cntlid": 125,
00:17:09.875      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:09.875      "listen_address": {
00:17:09.875        "adrfam": "IPv4",
00:17:09.875        "traddr": "10.0.0.3",
00:17:09.875        "trsvcid": "4420",
00:17:09.875        "trtype": "TCP"
00:17:09.875      },
00:17:09.875      "peer_address": {
00:17:09.875        "adrfam": "IPv4",
00:17:09.875        "traddr": "10.0.0.1",
00:17:09.875        "trsvcid": "51198",
00:17:09.875        "trtype": "TCP"
00:17:09.875      },
00:17:09.875      "qid": 0,
00:17:09.875      "state": "enabled",
00:17:09.875      "thread": "nvmf_tgt_poll_group_000"
00:17:09.875    }
00:17:09.875  ]'
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.875  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.134  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.134  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:10.135  13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:10.701  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:10.701  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.012  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.013  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.013  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:11.013  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:11.013  13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:11.271
00:17:11.271  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:11.271  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.271  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:11.530    {
00:17:11.530      "auth": {
00:17:11.530        "dhgroup": "ffdhe4096",
00:17:11.530        "digest": "sha512",
00:17:11.530        "state": "completed"
00:17:11.530      },
00:17:11.530      "cntlid": 127,
00:17:11.530      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:11.530      "listen_address": {
00:17:11.530        "adrfam": "IPv4",
00:17:11.530        "traddr": "10.0.0.3",
00:17:11.530        "trsvcid": "4420",
00:17:11.530        "trtype": "TCP"
00:17:11.530      },
00:17:11.530      "peer_address": {
00:17:11.530        "adrfam": "IPv4",
00:17:11.530        "traddr": "10.0.0.1",
00:17:11.530        "trsvcid": "51224",
00:17:11.530        "trtype": "TCP"
00:17:11.530      },
00:17:11.530      "qid": 0,
00:17:11.530      "state": "enabled",
00:17:11.530      "thread": "nvmf_tgt_poll_group_000"
00:17:11.530    }
00:17:11.530  ]'
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:11.530  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:11.789  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:11.789  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:11.789  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:11.789  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:11.789  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.048  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=:
00:17:12.048  13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=:
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.617  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:12.617  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:12.876  13:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.443
00:17:13.444  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:13.444  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.444  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:13.701    {
00:17:13.701      "auth": {
00:17:13.701        "dhgroup": "ffdhe6144",
00:17:13.701        "digest": "sha512",
00:17:13.701        "state": "completed"
00:17:13.701      },
00:17:13.701      "cntlid": 129,
00:17:13.701      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:13.701      "listen_address": {
00:17:13.701        "adrfam": "IPv4",
00:17:13.701        "traddr": "10.0.0.3",
00:17:13.701        "trsvcid": "4420",
00:17:13.701        "trtype": "TCP"
00:17:13.701      },
00:17:13.701      "peer_address": {
00:17:13.701        "adrfam": "IPv4",
00:17:13.701        "traddr": "10.0.0.1",
00:17:13.701        "trsvcid": "51254",
00:17:13.701        "trtype": "TCP"
00:17:13.701      },
00:17:13.701      "qid": 0,
00:17:13.701      "state": "enabled",
00:17:13.701      "thread": "nvmf_tgt_poll_group_000"
00:17:13.701    }
00:17:13.701  ]'
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.701  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:13.960  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=:
00:17:13.960  13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=:
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:14.527  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:14.527  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.786  13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:15.353
00:17:15.353  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:15.353  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:15.353  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:15.611    {
00:17:15.611      "auth": {
00:17:15.611        "dhgroup": "ffdhe6144",
00:17:15.611        "digest": "sha512",
00:17:15.611        "state": "completed"
00:17:15.611      },
00:17:15.611      "cntlid": 131,
00:17:15.611      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:15.611      "listen_address": {
00:17:15.611        "adrfam": "IPv4",
00:17:15.611        "traddr": "10.0.0.3",
00:17:15.611        "trsvcid": "4420",
00:17:15.611        "trtype": "TCP"
00:17:15.611      },
00:17:15.611      "peer_address": {
00:17:15.611        "adrfam": "IPv4",
00:17:15.611        "traddr": "10.0.0.1",
00:17:15.611        "trsvcid": "51270",
00:17:15.611        "trtype": "TCP"
00:17:15.611      },
00:17:15.611      "qid": 0,
00:17:15.611      "state": "enabled",
00:17:15.611      "thread": "nvmf_tgt_poll_group_000"
00:17:15.611    }
00:17:15.611  ]'
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:15.611  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:15.872  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==:
00:17:15.873  13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==:
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:16.441  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:16.441  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:16.701  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.269
00:17:17.269  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:17.269  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:17.269  13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:17.530    {
00:17:17.530      "auth": {
00:17:17.530        "dhgroup": "ffdhe6144",
00:17:17.530        "digest": "sha512",
00:17:17.530        "state": "completed"
00:17:17.530      },
00:17:17.530      "cntlid": 133,
00:17:17.530      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:17.530      "listen_address": {
00:17:17.530        "adrfam": "IPv4",
00:17:17.530        "traddr": "10.0.0.3",
00:17:17.530        "trsvcid": "4420",
00:17:17.530        "trtype": "TCP"
00:17:17.530      },
00:17:17.530      "peer_address": {
00:17:17.530        "adrfam": "IPv4",
00:17:17.530        "traddr": "10.0.0.1",
00:17:17.530        "trsvcid": "60118",
00:17:17.530        "trtype": "TCP"
00:17:17.530      },
00:17:17.530      "qid": 0,
00:17:17.530      "state": "enabled",
00:17:17.530      "thread": "nvmf_tgt_poll_group_000"
00:17:17.530    }
00:17:17.530  ]'
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.530  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.790  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:17.790  13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj:
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.365  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:18.365  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:18.624  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:19.190
00:17:19.190  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:19.190  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:19.190  13:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:19.449    {
00:17:19.449      "auth": {
00:17:19.449        "dhgroup": "ffdhe6144",
00:17:19.449        "digest": "sha512",
00:17:19.449        "state": "completed"
00:17:19.449      },
00:17:19.449      "cntlid": 135,
00:17:19.449      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce",
00:17:19.449      "listen_address": {
00:17:19.449        "adrfam": "IPv4",
00:17:19.449        "traddr": "10.0.0.3",
00:17:19.449        "trsvcid": "4420",
00:17:19.449        "trtype": "TCP"
00:17:19.449      },
00:17:19.449      "peer_address": {
00:17:19.449        "adrfam": "IPv4",
00:17:19.449        "traddr": "10.0.0.1",
00:17:19.449        "trsvcid": "60138",
00:17:19.449        "trtype": "TCP"
00:17:19.449      },
00:17:19.449      "qid": 0,
00:17:19.449      "state": "enabled",
00:17:19.449      "thread": "nvmf_tgt_poll_group_000"
00:17:19.449    }
00:17:19.449  ]'
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:19.449  13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.449 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.449 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.449 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.449 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.708 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:19.708 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.277 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.537 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.106 00:17:21.106 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.106 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.106 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.366 { 00:17:21.366 "auth": { 00:17:21.366 "dhgroup": "ffdhe8192", 00:17:21.366 "digest": "sha512", 00:17:21.366 "state": "completed" 00:17:21.366 }, 00:17:21.366 "cntlid": 137, 00:17:21.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:21.366 "listen_address": { 00:17:21.366 "adrfam": "IPv4", 00:17:21.366 "traddr": "10.0.0.3", 00:17:21.366 "trsvcid": "4420", 00:17:21.366 "trtype": "TCP" 00:17:21.366 }, 00:17:21.366 "peer_address": { 00:17:21.366 "adrfam": "IPv4", 00:17:21.366 "traddr": "10.0.0.1", 00:17:21.366 "trsvcid": "60164", 00:17:21.366 "trtype": "TCP" 00:17:21.366 }, 00:17:21.366 "qid": 0, 00:17:21.366 "state": "enabled", 00:17:21.366 "thread": "nvmf_tgt_poll_group_000" 00:17:21.366 } 00:17:21.366 ]' 00:17:21.366 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.626 13:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.626 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.886 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:17:21.886 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.475 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.734 13:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.734 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.302 00:17:23.302 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.302 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.302 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.302 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.302 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.302 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.302 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.303 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.303 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.303 { 00:17:23.303 "auth": { 00:17:23.303 "dhgroup": "ffdhe8192", 00:17:23.303 "digest": "sha512", 00:17:23.303 "state": "completed" 00:17:23.303 }, 00:17:23.303 "cntlid": 139, 00:17:23.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:23.303 "listen_address": { 00:17:23.303 "adrfam": "IPv4", 00:17:23.303 "traddr": "10.0.0.3", 00:17:23.303 "trsvcid": "4420", 00:17:23.303 "trtype": "TCP" 00:17:23.303 }, 00:17:23.303 "peer_address": { 00:17:23.303 "adrfam": "IPv4", 00:17:23.303 "traddr": "10.0.0.1", 00:17:23.303 "trsvcid": "60190", 00:17:23.303 "trtype": "TCP" 00:17:23.303 }, 00:17:23.303 "qid": 0, 00:17:23.303 "state": "enabled", 00:17:23.303 "thread": "nvmf_tgt_poll_group_000" 00:17:23.303 } 00:17:23.303 ]' 00:17:23.303 13:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.561 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.562 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.821 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:17:23.821 13:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: --dhchap-ctrl-secret DHHC-1:02:MjA3MGU3N2E4NzU5ZmExMjBmY2RjZDRlYjg3Y2RkMjY4YzM5ZjExNGNhMjNkMTIwy6VoNw==: 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.389 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.648 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.218 00:17:25.218 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.218 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.218 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.477 { 00:17:25.477 "auth": { 00:17:25.477 "dhgroup": "ffdhe8192", 00:17:25.477 "digest": "sha512", 00:17:25.477 "state": "completed" 00:17:25.477 }, 00:17:25.477 "cntlid": 141, 00:17:25.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:25.477 "listen_address": { 00:17:25.477 "adrfam": "IPv4", 00:17:25.477 "traddr": "10.0.0.3", 00:17:25.477 "trsvcid": "4420", 00:17:25.477 "trtype": "TCP" 00:17:25.477 }, 00:17:25.477 "peer_address": { 00:17:25.477 "adrfam": "IPv4", 00:17:25.477 "traddr": "10.0.0.1", 00:17:25.477 "trsvcid": "60228", 00:17:25.477 "trtype": "TCP" 00:17:25.477 }, 00:17:25.477 "qid": 0, 00:17:25.477 "state": 
"enabled", 00:17:25.477 "thread": "nvmf_tgt_poll_group_000" 00:17:25.477 } 00:17:25.477 ]' 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.477 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.478 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.478 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.737 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:17:25.737 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:01:NWU0NzA2YTk1OTkxNDg2ZjZhOTJlZDE1ZTU1NDY4MzQCbcYj: 00:17:26.305 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.306 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.565 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.134 00:17:27.134 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.134 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.134 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.397 { 00:17:27.397 "auth": { 00:17:27.397 "dhgroup": "ffdhe8192", 00:17:27.397 "digest": "sha512", 00:17:27.397 "state": "completed" 00:17:27.397 }, 00:17:27.397 "cntlid": 143, 00:17:27.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:27.397 "listen_address": { 00:17:27.397 "adrfam": "IPv4", 00:17:27.397 "traddr": "10.0.0.3", 00:17:27.397 "trsvcid": "4420", 00:17:27.397 "trtype": "TCP" 00:17:27.397 }, 00:17:27.397 "peer_address": { 00:17:27.397 "adrfam": "IPv4", 00:17:27.397 "traddr": "10.0.0.1", 00:17:27.397 "trsvcid": "60250", 00:17:27.397 "trtype": "TCP" 00:17:27.397 }, 00:17:27.397 "qid": 0, 00:17:27.397 
"state": "enabled", 00:17:27.397 "thread": "nvmf_tgt_poll_group_000" 00:17:27.397 } 00:17:27.397 ]' 00:17:27.397 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.658 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.924 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:27.924 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.497 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.757 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.324 00:17:29.324 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.324 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.324 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.584 { 00:17:29.584 "auth": { 00:17:29.584 "dhgroup": "ffdhe8192", 00:17:29.584 "digest": "sha512", 00:17:29.584 "state": "completed" 00:17:29.584 }, 00:17:29.584 
"cntlid": 145, 00:17:29.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:29.584 "listen_address": { 00:17:29.584 "adrfam": "IPv4", 00:17:29.584 "traddr": "10.0.0.3", 00:17:29.584 "trsvcid": "4420", 00:17:29.584 "trtype": "TCP" 00:17:29.584 }, 00:17:29.584 "peer_address": { 00:17:29.584 "adrfam": "IPv4", 00:17:29.584 "traddr": "10.0.0.1", 00:17:29.584 "trsvcid": "58262", 00:17:29.584 "trtype": "TCP" 00:17:29.584 }, 00:17:29.584 "qid": 0, 00:17:29.584 "state": "enabled", 00:17:29.584 "thread": "nvmf_tgt_poll_group_000" 00:17:29.584 } 00:17:29.584 ]' 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.584 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.843 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:17:29.843 13:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:00:MzI5MTI2YzEwY2MwYzBiMmY3NjZjNTgzOGFhZWI4YjEyMDJhNGIzZTAwZDgzMGM0NKl/sQ==: --dhchap-ctrl-secret DHHC-1:03:ZTVkZTUwOWNjMzM5ODdmYTg0NjNhMWEyMWQ1NmJhNjgzYTI5ZjBlOThlNWJiYTU1ZGM0YjQzYTBlNDEzYTk3Ma5ImkM=: 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 00:17:30.411 13:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:30.411 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:30.412 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:30.980 2024/11/06 13:47:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:30.981 request: 00:17:30.981 { 00:17:30.981 "method": "bdev_nvme_attach_controller", 00:17:30.981 "params": { 00:17:30.981 "name": "nvme0", 00:17:30.981 "trtype": "tcp", 00:17:30.981 "traddr": "10.0.0.3", 00:17:30.981 "adrfam": "ipv4", 00:17:30.981 "trsvcid": "4420", 00:17:30.981 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:30.981 "prchk_reftag": false, 00:17:30.981 "prchk_guard": false, 00:17:30.981 "hdgst": false, 00:17:30.981 "ddgst": false, 00:17:30.981 "dhchap_key": "key2", 00:17:30.981 "allow_unrecognized_csi": false 00:17:30.981 } 00:17:30.981 } 00:17:30.981 Got JSON-RPC error response 00:17:30.981 GoRPCClient: error on JSON-RPC call 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
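The Input/output error above is the expected outcome of a negative test, not a regression: the host was just registered on nqn.2024-03.io.spdk:cnode0 with key1 only, so an attach attempt that presents key2 fails DH-HMAC-CHAP negotiation and bdev_nvme_attach_controller returns Code=-5. A minimal sketch of the same check, assuming the rpc.py path and /var/tmp/host.sock socket used throughout this log, with hostnqn standing in for the uuid-based host NQN shown above:

# register the host with key1 only, then deliberately attach with the wrong key
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2; then
    echo "attach with a mismatched DH-CHAP key unexpectedly succeeded" >&2
    exit 1
fi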
00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.981 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.550 2024/11/06 13:47:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:31.550 request: 00:17:31.550 { 00:17:31.550 "method": "bdev_nvme_attach_controller", 00:17:31.550 "params": { 00:17:31.550 "name": "nvme0", 00:17:31.550 "trtype": "tcp", 00:17:31.550 "traddr": "10.0.0.3", 00:17:31.550 "adrfam": "ipv4", 00:17:31.550 "trsvcid": "4420", 00:17:31.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:31.550 "prchk_reftag": false, 00:17:31.550 "prchk_guard": false, 00:17:31.550 "hdgst": false, 00:17:31.550 "ddgst": false, 00:17:31.550 "dhchap_key": "key1", 00:17:31.550 "dhchap_ctrlr_key": "ckey2", 00:17:31.550 "allow_unrecognized_csi": false 00:17:31.550 } 00:17:31.550 } 00:17:31.550 Got JSON-RPC error response 00:17:31.550 GoRPCClient: error on JSON-RPC call 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.550 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.118 2024/11/06 13:47:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:32.118 request: 00:17:32.118 { 00:17:32.118 "method": "bdev_nvme_attach_controller", 00:17:32.118 "params": { 00:17:32.118 "name": "nvme0", 00:17:32.118 "trtype": "tcp", 00:17:32.118 "traddr": "10.0.0.3", 00:17:32.118 "adrfam": "ipv4", 00:17:32.118 "trsvcid": "4420", 00:17:32.118 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:32.118 "prchk_reftag": false, 00:17:32.118 "prchk_guard": false, 00:17:32.118 "hdgst": false, 00:17:32.118 "ddgst": false, 00:17:32.118 "dhchap_key": "key1", 00:17:32.118 "dhchap_ctrlr_key": "ckey1", 00:17:32.118 "allow_unrecognized_csi": false 00:17:32.118 } 00:17:32.118 } 00:17:32.118 Got JSON-RPC error response 00:17:32.118 GoRPCClient: error on JSON-RPC call 00:17:32.118 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.118 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.118 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.118 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.119 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76972 00:17:32.119 13:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 76972 ']' 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 76972 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.119 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76972 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76972' 00:17:32.377 killing process with pid 76972 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 76972 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 76972 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81695 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81695 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 81695 ']' 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
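The trace above kills the original target (pid 76972) and relaunches it with DH-HMAC-CHAP debug logging enabled. Stripped of the harness wrappers, the relaunch reduces to the sketch below; the command line is copied verbatim from the trace, while the backgrounding and pid capture are assumptions about what nvmfappstart does (the subsequent waitforlisten on the new pid 81695 suggests exactly this):

    # start a fresh nvmf_tgt in the test netns; --wait-for-rpc defers subsystem
    # initialization until an explicit RPC, -L nvmf_auth enables auth debug logs
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!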
00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:32.377 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.313 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.313 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:33.313 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.313 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.313 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81695 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 81695 ']' 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
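waitforlisten then blocks until the new target (pid 81695) answers on /var/tmp/spdk.sock. A rough equivalent of that wait, assuming rpc_get_methods as the liveness probe (the harness's actual polling loop may differ):

    # poll the UNIX-domain RPC socket until the freshly started target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 81695 2>/dev/null || exit 1   # give up if the target process died
        sleep 0.5
    done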
00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.572 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 null0 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.j0i 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.zdK ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdK 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Gp2 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.52W ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.52W 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:33.832 13:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gx1 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.GjI ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GjI 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.3fS 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
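The rpc_cmd batch above loads the test's DH-HMAC-CHAP keys into the target keyring and then authorizes the host for key3 only; key3 deliberately has no controller-side (ckey) counterpart, so no bidirectional secret is configured for it. Condensed from the trace, with rpc.py standing for the target-side /home/vagrant/spdk_repo/spdk/scripts/rpc.py on the default /var/tmp/spdk.sock:

    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.j0i
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdK
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.Gp2
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.52W
    rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.gx1
    rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GjI
    rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.3fS   # no ckey3 loaded
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3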
00:17:33.832 13:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.770 nvme0n1 00:17:34.770 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.770 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.770 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.030 { 00:17:35.030 "auth": { 00:17:35.030 "dhgroup": "ffdhe8192", 00:17:35.030 "digest": "sha512", 00:17:35.030 "state": "completed" 00:17:35.030 }, 00:17:35.030 "cntlid": 1, 00:17:35.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:35.030 "listen_address": { 00:17:35.030 "adrfam": "IPv4", 00:17:35.030 "traddr": "10.0.0.3", 00:17:35.030 "trsvcid": "4420", 00:17:35.030 "trtype": "TCP" 00:17:35.030 }, 00:17:35.030 "peer_address": { 00:17:35.030 "adrfam": "IPv4", 00:17:35.030 "traddr": "10.0.0.1", 00:17:35.030 "trsvcid": "58312", 00:17:35.030 "trtype": "TCP" 00:17:35.030 }, 00:17:35.030 "qid": 0, 00:17:35.030 "state": "enabled", 00:17:35.030 "thread": "nvmf_tgt_poll_group_000" 00:17:35.030 } 00:17:35.030 ]' 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.030 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.294 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.294 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.294 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.294 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.294 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.582 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:35.582 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key3 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:36.151 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.411 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.672 2024/11/06 13:47:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:36.672 request: 00:17:36.672 { 00:17:36.672 "method": "bdev_nvme_attach_controller", 00:17:36.672 "params": { 00:17:36.672 "name": "nvme0", 00:17:36.672 "trtype": "tcp", 00:17:36.672 "traddr": "10.0.0.3", 00:17:36.672 "adrfam": "ipv4", 00:17:36.672 "trsvcid": "4420", 00:17:36.672 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:36.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:36.672 "prchk_reftag": false, 00:17:36.672 "prchk_guard": false, 00:17:36.672 "hdgst": false, 00:17:36.672 "ddgst": false, 00:17:36.672 "dhchap_key": "key3", 00:17:36.672 "allow_unrecognized_csi": false 00:17:36.672 } 00:17:36.672 } 00:17:36.672 Got JSON-RPC error response 00:17:36.672 GoRPCClient: error on JSON-RPC call 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:36.672 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.932 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.191 2024/11/06 13:47:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:37.191 request: 00:17:37.191 { 00:17:37.191 "method": "bdev_nvme_attach_controller", 00:17:37.191 "params": { 00:17:37.191 "name": "nvme0", 00:17:37.191 "trtype": "tcp", 00:17:37.191 "traddr": "10.0.0.3", 00:17:37.191 "adrfam": "ipv4", 00:17:37.191 "trsvcid": "4420", 00:17:37.191 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:37.191 "prchk_reftag": false, 00:17:37.191 "prchk_guard": false, 00:17:37.191 "hdgst": false, 00:17:37.191 "ddgst": false, 00:17:37.191 "dhchap_key": "key3", 00:17:37.191 "allow_unrecognized_csi": false 00:17:37.191 } 00:17:37.191 } 00:17:37.191 Got JSON-RPC error response 00:17:37.191 GoRPCClient: error on JSON-RPC call 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.191 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.451 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.711 2024/11/06 13:47:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:37.711 request: 00:17:37.711 { 00:17:37.711 "method": "bdev_nvme_attach_controller", 00:17:37.711 "params": { 00:17:37.711 "name": "nvme0", 00:17:37.711 "trtype": "tcp", 00:17:37.711 "traddr": "10.0.0.3", 00:17:37.711 "adrfam": "ipv4", 00:17:37.711 "trsvcid": "4420", 00:17:37.711 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:37.711 "prchk_reftag": false, 00:17:37.711 "prchk_guard": false, 00:17:37.711 "hdgst": false, 00:17:37.711 "ddgst": false, 00:17:37.711 "dhchap_key": "key0", 00:17:37.711 "dhchap_ctrlr_key": "key1", 00:17:37.711 "allow_unrecognized_csi": false 00:17:37.711 } 00:17:37.711 } 00:17:37.711 Got JSON-RPC error response 00:17:37.711 GoRPCClient: error on JSON-RPC call 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:37.711 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:37.969 nvme0n1 00:17:37.969 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:37.969 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.969 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:38.227 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.227 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.227 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:38.486 13:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:39.422 nvme0n1 00:17:39.422 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:39.422 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:39.422 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.681 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:39.942 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.942 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:39.942 13:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid f9261117-b8af-4744-8c64-783a612116ce -l 0 --dhchap-secret DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: --dhchap-ctrl-secret DHHC-1:03:NzBjOWU4MmFkYzQwMTRkMWMwN2RkN2Y0NDZmNThiMjU2NDI5NDJiZTc1YzMyZTM5ZWE2NDZhZjE4NGU4NjY0ZH3objQ=: 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
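This stretch exercises target-side key rotation: nvmf_subsystem_set_keys replaces the key the subsystem will accept from this host, and only a connection presenting the new key authenticates (the sha512/ffdhe8192 connect with key3 above succeeded; after rotation the host reconnects with key1). Condensed from the trace, with rpc.py -s /var/tmp/host.sock addressing the host-side SPDK app's socket:

    # rotate the per-host key on the subsystem, then reconnect using it
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1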
00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.510 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:40.769 13:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:41.337 2024/11/06 13:47:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:17:41.337 request: 00:17:41.337 { 00:17:41.337 "method": "bdev_nvme_attach_controller", 00:17:41.337 "params": { 00:17:41.337 "name": "nvme0", 00:17:41.337 "trtype": "tcp", 00:17:41.337 "traddr": "10.0.0.3", 00:17:41.337 "adrfam": "ipv4", 
00:17:41.337 "trsvcid": "4420", 00:17:41.337 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:41.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce", 00:17:41.337 "prchk_reftag": false, 00:17:41.337 "prchk_guard": false, 00:17:41.337 "hdgst": false, 00:17:41.337 "ddgst": false, 00:17:41.337 "dhchap_key": "key1", 00:17:41.337 "allow_unrecognized_csi": false 00:17:41.337 } 00:17:41.337 } 00:17:41.338 Got JSON-RPC error response 00:17:41.338 GoRPCClient: error on JSON-RPC call 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:41.338 13:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:42.275 nvme0n1 00:17:42.275 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:42.275 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:42.275 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:42.535 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:43.104 nvme0n1 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.104 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: '' 2s 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: ]] 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWM1OTc1N2VkZjkyOWFjZGZiZWM4N2FjZTVjZTI0Y2EEGEqN: 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:43.364 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: 2s 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: 00:17:45.909 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: ]] 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDE5NzFiMGM4Y2Q4Nzk5MTYzODRmMzE0YzJmOTE1NTIyOTA4NDA4NDRlNWU5MTU3lgvkOA==: 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:45.910 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:47.815 13:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:48.383 nvme0n1 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.384 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.951 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:48.951 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.951 13:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:49.209 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:49.468 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:49.468 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:49.468 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:49.728 13:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
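bdev_nvme_set_keys re-authenticates an already-attached controller in place, without tearing the connection down. The negative case issued above is expected to fail: the target was just re-keyed to key2/key3 for this host, so re-keying the live host controller to key1/key3 cannot authenticate, and the RPC comes back with Code=-13 (Permission denied), as the response below confirms. The failing pair of calls, condensed from the trace:

    # target now requires key2 (host) / key3 (controller) for this host ...
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # ... so presenting key1 from the live host controller is denied
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3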
00:17:50.297 2024/11/06 13:47:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:17:50.297 request: 00:17:50.297 { 00:17:50.297 "method": "bdev_nvme_set_keys", 00:17:50.297 "params": { 00:17:50.297 "name": "nvme0", 00:17:50.297 "dhchap_key": "key1", 00:17:50.297 "dhchap_ctrlr_key": "key3" 00:17:50.297 } 00:17:50.297 } 00:17:50.297 Got JSON-RPC error response 00:17:50.297 GoRPCClient: error on JSON-RPC call 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:50.297 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.556 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:50.556 13:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:51.492 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:51.492 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:51.492 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.751 13:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.689 nvme0n1 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:52.689 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:53.257 2024/11/06 13:47:33 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:17:53.257 request: 00:17:53.257 { 00:17:53.257 "method": "bdev_nvme_set_keys", 00:17:53.257 "params": { 00:17:53.257 "name": "nvme0", 00:17:53.257 "dhchap_key": "key2", 00:17:53.257 "dhchap_ctrlr_key": "key0" 00:17:53.257 } 00:17:53.257 } 00:17:53.257 Got JSON-RPC error response 00:17:53.257 GoRPCClient: error on JSON-RPC call 00:17:53.257 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:53.257 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.257 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.257 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.258 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:53.258 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.258 13:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:53.517 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:53.517 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:54.453 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:54.453 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.453 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77016 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 77016 ']' 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 77016 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77016 00:17:54.712 killing process with pid 77016 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77016' 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 77016 00:17:54.712 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 77016 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.972 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.972 rmmod nvme_tcp 00:17:54.972 rmmod nvme_fabrics 00:17:54.972 rmmod nvme_keyring 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 
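Note: the nvmfcleanup sequence traced above (nvmf/common.sh@121-128) unloads the kernel initiator stack in dependency order: removing nvme-tcp also drops nvme_fabrics and nvme_keyring (the three rmmod lines a few entries back), and modprobe -r nvme-fabrics then runs as a catch-all, all under set +e so a transiently busy module cannot abort the run. A condensed sketch reconstructed from the xtrace lines (the break-on-success condition is an assumption; the real loop lives in test/nvmf/common.sh):

sync                        # flush outstanding I/O first
set +e                      # module removal may fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # assumed retry/break behavior
done
modprobe -v -r nvme-fabrics
set -e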
00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81695 ']' 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81695 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 81695 ']' 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 81695 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81695 00:17:55.231 killing process with pid 81695 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81695' 00:17:55.231 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 81695 00:17:55.232 13:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 81695 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:55.491 13:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:55.491 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.j0i /tmp/spdk.key-sha256.Gp2 /tmp/spdk.key-sha384.gx1 /tmp/spdk.key-sha512.3fS /tmp/spdk.key-sha512.zdK /tmp/spdk.key-sha384.52W /tmp/spdk.key-sha256.GjI '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log
00:17:55.751 ************************************
00:17:55.751 END TEST nvmf_auth_target
00:17:55.751 ************************************
00:17:55.751
00:17:55.751 real 2m51.591s
00:17:55.751 user 6m39.730s
00:17:55.751 sys 0m34.287s
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:55.751 ************************************
00:17:55.751 START TEST nvmf_bdevio_no_huge
00:17:55.751 ************************************
00:17:55.751 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:56.012 * Looking for test storage...
00:17:56.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.012 --rc genhtml_branch_coverage=1 00:17:56.012 --rc genhtml_function_coverage=1 00:17:56.012 --rc genhtml_legend=1 00:17:56.012 --rc geninfo_all_blocks=1 00:17:56.012 --rc geninfo_unexecuted_blocks=1 00:17:56.012 00:17:56.012 ' 00:17:56.012 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.012 --rc genhtml_branch_coverage=1 00:17:56.012 --rc genhtml_function_coverage=1 00:17:56.012 --rc genhtml_legend=1 00:17:56.012 --rc geninfo_all_blocks=1 00:17:56.012 --rc geninfo_unexecuted_blocks=1 00:17:56.012 00:17:56.012 ' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:56.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.013 --rc genhtml_branch_coverage=1 00:17:56.013 --rc genhtml_function_coverage=1 00:17:56.013 --rc genhtml_legend=1 00:17:56.013 --rc geninfo_all_blocks=1 00:17:56.013 --rc geninfo_unexecuted_blocks=1 00:17:56.013 00:17:56.013 ' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:56.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.013 --rc genhtml_branch_coverage=1 00:17:56.013 --rc genhtml_function_coverage=1 00:17:56.013 --rc genhtml_legend=1 00:17:56.013 --rc geninfo_all_blocks=1 00:17:56.013 --rc geninfo_unexecuted_blocks=1 00:17:56.013 00:17:56.013 ' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.013 
13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:56.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:56.013 
13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:56.013 Cannot find device "nvmf_init_br" 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:56.013 Cannot find device "nvmf_init_br2" 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:56.013 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:56.273 Cannot find device "nvmf_tgt_br" 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.273 Cannot find device "nvmf_tgt_br2" 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:56.273 Cannot find device "nvmf_init_br" 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:17:56.273 13:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:56.273 Cannot find device "nvmf_init_br2" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:56.273 Cannot find device "nvmf_tgt_br" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:56.273 Cannot find device "nvmf_tgt_br2" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:56.273 Cannot find device "nvmf_br" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:56.273 Cannot find device "nvmf_init_if" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:56.273 Cannot find device "nvmf_init_if2" 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:17:56.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:56.273 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:56.533 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:56.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:56.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:17:56.533 00:17:56.533 --- 10.0.0.3 ping statistics --- 00:17:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.533 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:56.533 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:56.533 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.118 ms 00:17:56.533 00:17:56.533 --- 10.0.0.4 ping statistics --- 00:17:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.533 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:56.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:56.533 00:17:56.533 --- 10.0.0.1 ping statistics --- 00:17:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.533 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:56.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:56.533 00:17:56.533 --- 10.0.0.2 ping statistics --- 00:17:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.533 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.533 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82546 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82546 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 82546 ']' 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.534 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.534 [2024-11-06 13:47:37.460383] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:17:56.534 [2024-11-06 13:47:37.460459] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:56.793 [2024-11-06 13:47:37.641994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.793 [2024-11-06 13:47:37.715557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.793 [2024-11-06 13:47:37.715836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.793 [2024-11-06 13:47:37.715855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.793 [2024-11-06 13:47:37.715878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.793 [2024-11-06 13:47:37.715885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.793 [2024-11-06 13:47:37.716561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:56.793 [2024-11-06 13:47:37.716934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:56.793 [2024-11-06 13:47:37.717111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:56.793 [2024-11-06 13:47:37.717113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 [2024-11-06 13:47:38.427575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 Malloc0 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 [2024-11-06 13:47:38.479827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:57.731 { 00:17:57.731 "params": { 00:17:57.731 "name": "Nvme$subsystem", 00:17:57.731 "trtype": "$TEST_TRANSPORT", 00:17:57.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:57.731 "adrfam": "ipv4", 00:17:57.731 "trsvcid": "$NVMF_PORT", 00:17:57.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:57.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:57.731 "hdgst": ${hdgst:-false}, 00:17:57.731 "ddgst": ${ddgst:-false} 00:17:57.731 }, 00:17:57.731 "method": "bdev_nvme_attach_controller" 00:17:57.731 } 00:17:57.731 EOF 00:17:57.731 )") 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
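Note: gen_nvmf_target_json above assembles the JSON that bdevio reads from /dev/fd/62: for each requested subsystem it expands the heredoc into one bdev_nvme_attach_controller entry, joins the entries with IFS=',', and pretty-prints the result through jq. A trimmed, runnable sketch of the same idea (hedged: the real helper in test/nvmf/common.sh substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT and carries more plumbing; the literals here are the values printed just below):

gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Single-subsystem case shown; with several entries the joined list
    # would need an enclosing array/object before jq would accept it.
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

# Roughly how the run above wires it up (bdevio path abbreviated):
#   bdevio --json <(gen_target_json) --no-huge -s 1024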
00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:57.731 13:47:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:57.731 "params": { 00:17:57.731 "name": "Nvme1", 00:17:57.731 "trtype": "tcp", 00:17:57.731 "traddr": "10.0.0.3", 00:17:57.731 "adrfam": "ipv4", 00:17:57.731 "trsvcid": "4420", 00:17:57.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.731 "hdgst": false, 00:17:57.731 "ddgst": false 00:17:57.731 }, 00:17:57.731 "method": "bdev_nvme_attach_controller" 00:17:57.731 }' 00:17:57.731 [2024-11-06 13:47:38.540533] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:17:57.732 [2024-11-06 13:47:38.540774] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82600 ] 00:17:57.991 [2024-11-06 13:47:38.697234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.991 [2024-11-06 13:47:38.765120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.991 [2024-11-06 13:47:38.765301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.991 [2024-11-06 13:47:38.765303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.251 I/O targets: 00:17:58.251 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:58.251 00:17:58.251 00:17:58.251 CUnit - A unit testing framework for C - Version 2.1-3 00:17:58.251 http://cunit.sourceforge.net/ 00:17:58.251 00:17:58.251 00:17:58.251 Suite: bdevio tests on: Nvme1n1 00:17:58.251 Test: blockdev write read block ...passed 00:17:58.251 Test: blockdev write zeroes read block ...passed 00:17:58.251 Test: blockdev write zeroes read no split ...passed 00:17:58.251 Test: blockdev write zeroes read split ...passed 00:17:58.251 Test: blockdev write zeroes read split partial ...passed 00:17:58.251 Test: blockdev reset ...[2024-11-06 13:47:39.117003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:58.251 [2024-11-06 13:47:39.117446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1303380 (9): Bad file descriptor 00:17:58.251 [2024-11-06 13:47:39.128417] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:58.251 passed 00:17:58.251 Test: blockdev write read 8 blocks ...passed 00:17:58.251 Test: blockdev write read size > 128k ...passed 00:17:58.251 Test: blockdev write read invalid size ...passed 00:17:58.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.251 Test: blockdev write read max offset ...passed 00:17:58.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.510 Test: blockdev writev readv 8 blocks ...passed 00:17:58.510 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.510 Test: blockdev writev readv block ...passed 00:17:58.510 Test: blockdev writev readv size > 128k ...passed 00:17:58.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.510 Test: blockdev comparev and writev ...[2024-11-06 13:47:39.299941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.300147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.300455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.300482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.300752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.300779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.300788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.301065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.301093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:58.510 [2024-11-06 13:47:39.301102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:58.510 passed 00:17:58.510 Test: blockdev nvme passthru rw ...passed 00:17:58.510 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:47:39.383501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:58.510 [2024-11-06 13:47:39.383537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.383644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:58.510 [2024-11-06 13:47:39.383656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:58.510 [2024-11-06 13:47:39.383750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:58.510 [2024-11-06 13:47:39.383762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:58.510 passed 00:17:58.510 Test: blockdev nvme admin passthru ...[2024-11-06 13:47:39.383846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:58.510 [2024-11-06 13:47:39.383857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:58.510 passed 00:17:58.769 Test: blockdev copy ...passed 00:17:58.769 00:17:58.769 Run Summary: Type Total Ran Passed Failed Inactive 00:17:58.769 suites 1 1 n/a 0 0 00:17:58.769 tests 23 23 23 0 0 00:17:58.769 asserts 152 152 152 0 n/a 00:17:58.769 00:17:58.769 Elapsed time = 0.930 seconds 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.028 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.028 rmmod nvme_tcp 00:17:59.028 rmmod nvme_fabrics 00:17:59.028 rmmod nvme_keyring 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82546 ']' 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82546 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 82546 ']' 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 82546 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.294 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82546 00:17:59.294 killing process with pid 82546 00:17:59.294 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:59.294 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:59.294 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82546' 00:17:59.294 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 82546 00:17:59.294 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 82546 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:59.870 13:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.870 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:18:00.130 ************************************ 00:18:00.130 END TEST nvmf_bdevio_no_huge 00:18:00.130 ************************************ 00:18:00.130 00:18:00.130 real 0m4.238s 00:18:00.130 user 0m13.136s 00:18:00.130 sys 0m1.888s 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.130 ************************************ 00:18:00.130 START TEST nvmf_tls 00:18:00.130 ************************************ 00:18:00.130 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:00.130 * Looking for test storage... 
00:18:00.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:00.130 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:00.389 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:00.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.390 --rc genhtml_branch_coverage=1 00:18:00.390 --rc genhtml_function_coverage=1 00:18:00.390 --rc genhtml_legend=1 00:18:00.390 --rc geninfo_all_blocks=1 00:18:00.390 --rc geninfo_unexecuted_blocks=1 00:18:00.390 00:18:00.390 ' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:00.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.390 --rc genhtml_branch_coverage=1 00:18:00.390 --rc genhtml_function_coverage=1 00:18:00.390 --rc genhtml_legend=1 00:18:00.390 --rc geninfo_all_blocks=1 00:18:00.390 --rc geninfo_unexecuted_blocks=1 00:18:00.390 00:18:00.390 ' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:00.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.390 --rc genhtml_branch_coverage=1 00:18:00.390 --rc genhtml_function_coverage=1 00:18:00.390 --rc genhtml_legend=1 00:18:00.390 --rc geninfo_all_blocks=1 00:18:00.390 --rc geninfo_unexecuted_blocks=1 00:18:00.390 00:18:00.390 ' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:00.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.390 --rc genhtml_branch_coverage=1 00:18:00.390 --rc genhtml_function_coverage=1 00:18:00.390 --rc genhtml_legend=1 00:18:00.390 --rc geninfo_all_blocks=1 00:18:00.390 --rc geninfo_unexecuted_blocks=1 00:18:00.390 00:18:00.390 ' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.390 13:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.390 
13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.390 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.391 Cannot find device "nvmf_init_br" 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.391 Cannot find device "nvmf_init_br2" 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.391 Cannot find device "nvmf_tgt_br" 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.391 Cannot find device "nvmf_tgt_br2" 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.391 Cannot find device "nvmf_init_br" 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:18:00.391 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.649 Cannot find device "nvmf_init_br2" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.649 Cannot find device "nvmf_tgt_br" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.649 Cannot find device "nvmf_tgt_br2" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.649 Cannot find device "nvmf_br" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.649 Cannot find device "nvmf_init_if" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.649 Cannot find device "nvmf_init_if2" 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.649 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.908 13:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:18:00.908 00:18:00.908 --- 10.0.0.3 ping statistics --- 00:18:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.908 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.908 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.908 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:18:00.908 00:18:00.908 --- 10.0.0.4 ping statistics --- 00:18:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.908 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:18:00.908 00:18:00.908 --- 10.0.0.1 ping statistics --- 00:18:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.908 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:18:00.908 00:18:00.908 --- 10.0.0.2 ping statistics --- 00:18:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.908 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82838 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82838 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 82838 ']' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.908 13:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:01.167 [2024-11-06 13:47:41.844250] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
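The ip netns / ip link / iptables / ping trace above is the harness building its virtual test network before launching nvmf_tgt inside the target namespace. A condensed sketch of the equivalent setup, keeping the namespace and interface names from the trace (the second initiator/target veth pair, nvmf_init_if2/nvmf_tgt_if2, is omitted for brevity; all commands run as root):

    # namespace that hosts the SPDK target process
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends join a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator is 10.0.0.1 in the root namespace, target is 10.0.0.3 in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the *_br peers so the initiator and target ends can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic (port 4420) and verify connectivity
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: nvmftestinit first tears down any leftover topology, and each failing deletion is followed by "# true" in the trace, so on a clean host those steps fail harmlessly before the fresh setup shown here.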
00:18:01.167 [2024-11-06 13:47:41.844314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.167 [2024-11-06 13:47:41.995737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.167 [2024-11-06 13:47:42.069638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.168 [2024-11-06 13:47:42.069694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.168 [2024-11-06 13:47:42.069704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.168 [2024-11-06 13:47:42.069713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.168 [2024-11-06 13:47:42.069720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.168 [2024-11-06 13:47:42.070116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:02.104 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:02.104 true 00:18:02.363 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.363 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:02.364 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:02.364 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:02.364 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.622 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:02.622 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.882 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:02.882 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:02.882 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:03.141 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:03.141 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:03.401 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:03.401 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:03.401 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.401 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:03.660 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:03.660 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:03.660 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:03.924 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.924 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:03.924 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:03.924 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:03.924 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:04.199 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.199 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QuicOnWvSi 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.5SZlbHaX81 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:04.459 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:04.718 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QuicOnWvSi 00:18:04.718 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.5SZlbHaX81 00:18:04.718 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:04.718 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:05.287 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QuicOnWvSi 00:18:05.287 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QuicOnWvSi 00:18:05.287 13:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.287 [2024-11-06 13:47:46.164552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.287 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.546 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:05.805 [2024-11-06 13:47:46.607902] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.805 [2024-11-06 13:47:46.608160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:05.805 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.065 malloc0 00:18:06.065 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.326 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi 00:18:06.589 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:06.847 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QuicOnWvSi 00:18:16.865 Initializing NVMe Controllers 00:18:16.865 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.865 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:16.865 Initialization complete. Launching workers. 00:18:16.865 ======================================================== 00:18:16.865 Latency(us) 00:18:16.865 Device Information : IOPS MiB/s Average min max 00:18:16.865 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13789.99 53.87 4641.69 923.42 17160.45 00:18:16.865 ======================================================== 00:18:16.865 Total : 13789.99 53.87 4641.69 923.42 17160.45 00:18:16.865 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QuicOnWvSi 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QuicOnWvSi 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83206 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83206 /var/tmp/bdevperf.sock 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83206 ']' 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.865 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.865 [2024-11-06 13:47:57.793627] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
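Functionally, the TLS target provisioning traced above reduces to a short RPC sequence: pin the ssl socket implementation to TLS 1.3 (possible before framework init because the target was started with --wait-for-rpc), create the TCP transport and the subsystem, open a listener with TLS enabled (-k), back the subsystem with a malloc bdev, register the PSK file with the keyring (it must be chmod 0600 first, as the trace does), and authorize the host against that key. A sketch using the same RPCs and arguments as the trace, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the mktemp'd key path reused:

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The key file holds the interchange-format PSK emitted by format_interchange_psk above. A sketch of what that helper computes, assuming the standard NVMe/TCP interchange layout (base64 of the configured secret with a 4-byte little-endian CRC-32 appended; the 01 field selects SHA-256):

    python3 - <<'EOF'
    import base64, struct, zlib
    secret = b'00112233445566778899aabbccddeeff'   # configured PSK bytes, as in the trace
    crc = struct.pack('<I', zlib.crc32(secret))    # little-endian CRC-32 of the secret
    print('NVMeTLSkey-1:01:%s:' % base64.b64encode(secret + crc).decode())
    EOF

which should reproduce the NVMeTLSkey-1:01:MDAx...JEiQ: string printed in the trace. The second key (/tmp/tmp.5SZlbHaX81, built from ffeeddccbbaa99887766554433221100) is deliberately never registered with the target, which is why the wrong-key negative test later in the trace fails bdev_nvme_attach_controller with the JSON-RPC error shown there.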
00:18:16.865 [2024-11-06 13:47:57.793697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83206 ] 00:18:17.124 [2024-11-06 13:47:57.945877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.124 [2024-11-06 13:47:57.990098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.060 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.060 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:18.060 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi 00:18:18.060 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.319 [2024-11-06 13:47:59.054673] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.319 TLSTESTn1 00:18:18.319 13:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.319 Running I/O for 10 seconds... 00:18:20.632 5556.00 IOPS, 21.70 MiB/s [2024-11-06T13:48:02.501Z] 5494.50 IOPS, 21.46 MiB/s [2024-11-06T13:48:03.448Z] 5583.33 IOPS, 21.81 MiB/s [2024-11-06T13:48:04.422Z] 5621.25 IOPS, 21.96 MiB/s [2024-11-06T13:48:05.360Z] 5653.60 IOPS, 22.08 MiB/s [2024-11-06T13:48:06.298Z] 5657.67 IOPS, 22.10 MiB/s [2024-11-06T13:48:07.677Z] 5672.86 IOPS, 22.16 MiB/s [2024-11-06T13:48:08.245Z] 5677.50 IOPS, 22.18 MiB/s [2024-11-06T13:48:09.637Z] 5685.67 IOPS, 22.21 MiB/s [2024-11-06T13:48:09.637Z] 5687.20 IOPS, 22.22 MiB/s 00:18:28.704 Latency(us) 00:18:28.704 [2024-11-06T13:48:09.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.704 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:28.704 Verification LBA range: start 0x0 length 0x2000 00:18:28.704 TLSTESTn1 : 10.01 5692.84 22.24 0.00 0.00 22450.27 3842.67 30951.94 00:18:28.704 [2024-11-06T13:48:09.637Z] =================================================================================================================== 00:18:28.704 [2024-11-06T13:48:09.637Z] Total : 5692.84 22.24 0.00 0.00 22450.27 3842.67 30951.94 00:18:28.704 { 00:18:28.704 "results": [ 00:18:28.704 { 00:18:28.704 "job": "TLSTESTn1", 00:18:28.704 "core_mask": "0x4", 00:18:28.704 "workload": "verify", 00:18:28.704 "status": "finished", 00:18:28.704 "verify_range": { 00:18:28.704 "start": 0, 00:18:28.704 "length": 8192 00:18:28.704 }, 00:18:28.704 "queue_depth": 128, 00:18:28.704 "io_size": 4096, 00:18:28.704 "runtime": 10.01222, 00:18:28.704 "iops": 5692.843345431882, 00:18:28.704 "mibps": 22.23766931809329, 00:18:28.704 "io_failed": 0, 00:18:28.704 "io_timeout": 0, 00:18:28.704 "avg_latency_us": 22450.265275904134, 00:18:28.704 "min_latency_us": 3842.673092369478, 00:18:28.704 "max_latency_us": 30951.942168674697 00:18:28.704 } 00:18:28.704 ], 00:18:28.704 "core_count": 1 00:18:28.704 } 00:18:28.704 13:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83206 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83206 ']' 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83206 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83206 00:18:28.704 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83206' 00:18:28.705 killing process with pid 83206 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83206 00:18:28.705 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.705 00:18:28.705 Latency(us) 00:18:28.705 [2024-11-06T13:48:09.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.705 [2024-11-06T13:48:09.638Z] =================================================================================================================== 00:18:28.705 [2024-11-06T13:48:09.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83206 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SZlbHaX81 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SZlbHaX81 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5SZlbHaX81 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5SZlbHaX81 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83364 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83364 /var/tmp/bdevperf.sock 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83364 ']' 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.705 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.962 [2024-11-06 13:48:09.650494] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:18:28.962 [2024-11-06 13:48:09.650587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83364 ] 00:18:28.962 [2024-11-06 13:48:09.803196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.962 [2024-11-06 13:48:09.870786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.899 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.899 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:29.899 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5SZlbHaX81 00:18:29.899 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.158 [2024-11-06 13:48:10.968786] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.159 [2024-11-06 13:48:10.978768] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:30.159 [2024-11-06 13:48:10.979420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4dac0 (107): Transport endpoint is not connected 00:18:30.159 [2024-11-06 13:48:10.980407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4dac0 (9): Bad file descriptor 00:18:30.159 [2024-11-06 
13:48:10.981402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:30.159 [2024-11-06 13:48:10.981427] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:30.159 [2024-11-06 13:48:10.981438] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:30.159 [2024-11-06 13:48:10.981454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:30.159 2024/11/06 13:48:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:30.159 request: 00:18:30.159 { 00:18:30.159 "method": "bdev_nvme_attach_controller", 00:18:30.159 "params": { 00:18:30.159 "name": "TLSTEST", 00:18:30.159 "trtype": "tcp", 00:18:30.159 "traddr": "10.0.0.3", 00:18:30.159 "adrfam": "ipv4", 00:18:30.159 "trsvcid": "4420", 00:18:30.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.159 "prchk_reftag": false, 00:18:30.159 "prchk_guard": false, 00:18:30.159 "hdgst": false, 00:18:30.159 "ddgst": false, 00:18:30.159 "psk": "key0", 00:18:30.159 "allow_unrecognized_csi": false 00:18:30.159 } 00:18:30.159 } 00:18:30.159 Got JSON-RPC error response 00:18:30.159 GoRPCClient: error on JSON-RPC call 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83364 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83364 ']' 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83364 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83364 00:18:30.159 killing process with pid 83364 00:18:30.159 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.159 00:18:30.159 Latency(us) 00:18:30.159 [2024-11-06T13:48:11.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.159 [2024-11-06T13:48:11.092Z] =================================================================================================================== 00:18:30.159 [2024-11-06T13:48:11.092Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83364' 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83364 00:18:30.159 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@976 -- # wait 83364 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QuicOnWvSi 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QuicOnWvSi 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QuicOnWvSi 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QuicOnWvSi 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83417 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83417 /var/tmp/bdevperf.sock 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83417 ']' 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
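For readability: every run_bdevperf invocation traced in this section drives the same RPC sequence against a freshly started bdevperf instance. A minimal bash sketch of that flow, reassembled from the tls.sh trace above (same binaries, paths and NQNs as in the log; the waitforlisten readiness check and error handling are omitted, so this is an illustration rather than the verbatim test source):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# start bdevperf idle (-z) listening on its own RPC socket (-r)
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# register the PSK file under the name "key0"
$rpc keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi
# attach over TCP with TLS; --psk names the registered key, not the file path
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# only reached when the attach succeeds; the NOT-wrapped cases fail one step earlier
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests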
00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:30.419 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.679 [2024-11-06 13:48:11.379274] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:18:30.679 [2024-11-06 13:48:11.379774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83417 ] 00:18:30.679 [2024-11-06 13:48:11.532132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.679 [2024-11-06 13:48:11.600076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.617 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.617 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:31.617 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi 00:18:31.617 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:31.876 [2024-11-06 13:48:12.707693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.876 [2024-11-06 13:48:12.716084] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:31.876 [2024-11-06 13:48:12.716133] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:31.876 [2024-11-06 13:48:12.716195] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:31.876 [2024-11-06 13:48:12.717147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f8ac0 (107): Transport endpoint is not connected 00:18:31.876 [2024-11-06 13:48:12.718132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f8ac0 (9): Bad file descriptor 00:18:31.876 [2024-11-06 13:48:12.719129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:31.876 [2024-11-06 13:48:12.719153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:31.876 [2024-11-06 13:48:12.719162] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:31.876 [2024-11-06 13:48:12.719177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
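The failure above is the expected outcome of this case: the target resolves the TLS PSK through an identity that couples the host NQN and the subsystem NQN, and only nqn.2016-06.io.spdk:host1 was registered with a key on cnode1. A host2 connection could only pass the lookup if the target had also been given a PSK for that host, along the lines of the following call (hypothetical here; the test deliberately omits it):

# hypothetical target-side registration that would let host2 pass the PSK lookup
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0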
00:18:31.876 2024/11/06 13:48:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:31.876 request: 00:18:31.876 { 00:18:31.876 "method": "bdev_nvme_attach_controller", 00:18:31.876 "params": { 00:18:31.876 "name": "TLSTEST", 00:18:31.876 "trtype": "tcp", 00:18:31.876 "traddr": "10.0.0.3", 00:18:31.876 "adrfam": "ipv4", 00:18:31.876 "trsvcid": "4420", 00:18:31.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:31.876 "prchk_reftag": false, 00:18:31.876 "prchk_guard": false, 00:18:31.876 "hdgst": false, 00:18:31.876 "ddgst": false, 00:18:31.876 "psk": "key0", 00:18:31.876 "allow_unrecognized_csi": false 00:18:31.876 } 00:18:31.876 } 00:18:31.876 Got JSON-RPC error response 00:18:31.876 GoRPCClient: error on JSON-RPC call 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83417 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83417 ']' 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83417 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83417 00:18:31.876 killing process with pid 83417 00:18:31.876 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.876 00:18:31.876 Latency(us) 00:18:31.876 [2024-11-06T13:48:12.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.876 [2024-11-06T13:48:12.809Z] =================================================================================================================== 00:18:31.876 [2024-11-06T13:48:12.809Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83417' 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83417 00:18:31.876 13:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83417 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.135 13:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QuicOnWvSi 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QuicOnWvSi 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QuicOnWvSi 00:18:32.135 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QuicOnWvSi 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83469 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.136 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83469 /var/tmp/bdevperf.sock 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83469 ']' 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.395 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.395 [2024-11-06 13:48:13.129991] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:18:32.395 [2024-11-06 13:48:13.130081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83469 ] 00:18:32.395 [2024-11-06 13:48:13.268383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.696 [2024-11-06 13:48:13.341620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.264 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.264 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:33.264 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QuicOnWvSi 00:18:33.523 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.782 [2024-11-06 13:48:14.490112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.782 [2024-11-06 13:48:14.494969] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.782 [2024-11-06 13:48:14.495016] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.782 [2024-11-06 13:48:14.495081] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:33.782 [2024-11-06 13:48:14.495702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14ac0 (107): Transport endpoint is not connected 00:18:33.782 [2024-11-06 13:48:14.496686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14ac0 (9): Bad file descriptor 00:18:33.782 [2024-11-06 13:48:14.497683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:33.782 [2024-11-06 13:48:14.497705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:33.782 [2024-11-06 13:48:14.497716] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:33.782 [2024-11-06 13:48:14.497731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
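Same pattern from the other direction: host1 holds a PSK on cnode1 but not on cnode2, so the identity lookup fails again. Judging by the error strings in this log, the identity the target searches for is assembled roughly as below (the NVMe0R01 prefix encodes PSK version/type/hash per the NVMe/TCP TLS spec; this is shown only to explain the log line, not taken from the SPDK source):

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
identity="NVMe0R01 ${hostnqn} ${subnqn}"   # matches "Could not find PSK for identity: ..." above

Note that the initiator side only surfaces Code=-5 (Input/output error); the identity mismatch itself is visible only in the target-side tcp.c and posix.c messages.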
00:18:33.782 2024/11/06 13:48:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:33.782 request: 00:18:33.782 { 00:18:33.782 "method": "bdev_nvme_attach_controller", 00:18:33.782 "params": { 00:18:33.782 "name": "TLSTEST", 00:18:33.782 "trtype": "tcp", 00:18:33.782 "traddr": "10.0.0.3", 00:18:33.782 "adrfam": "ipv4", 00:18:33.782 "trsvcid": "4420", 00:18:33.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:33.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.782 "prchk_reftag": false, 00:18:33.782 "prchk_guard": false, 00:18:33.782 "hdgst": false, 00:18:33.782 "ddgst": false, 00:18:33.782 "psk": "key0", 00:18:33.782 "allow_unrecognized_csi": false 00:18:33.782 } 00:18:33.782 } 00:18:33.782 Got JSON-RPC error response 00:18:33.782 GoRPCClient: error on JSON-RPC call 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83469 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83469 ']' 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83469 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83469 00:18:33.782 killing process with pid 83469 00:18:33.782 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.782 00:18:33.782 Latency(us) 00:18:33.782 [2024-11-06T13:48:14.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.782 [2024-11-06T13:48:14.715Z] =================================================================================================================== 00:18:33.782 [2024-11-06T13:48:14.715Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83469' 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83469 00:18:33.782 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83469 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.042 13:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83522 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83522 /var/tmp/bdevperf.sock 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83522 ']' 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:34.042 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.042 [2024-11-06 13:48:14.895435] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
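The tls.sh@156 case launched above passes an empty string as the key path; its failure shows up just below. The equivalent standalone call, for reference (expected to fail):

# keyring_file only accepts absolute paths, so '' is rejected before any file I/O
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''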
00:18:34.042 [2024-11-06 13:48:14.895515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83522 ] 00:18:34.301 [2024-11-06 13:48:15.048951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.301 [2024-11-06 13:48:15.120615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.869 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.869 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:34.869 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:35.128 [2024-11-06 13:48:15.973677] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:35.128 [2024-11-06 13:48:15.973733] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:35.128 2024/11/06 13:48:15 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:35.128 request: 00:18:35.128 { 00:18:35.128 "method": "keyring_file_add_key", 00:18:35.128 "params": { 00:18:35.128 "name": "key0", 00:18:35.128 "path": "" 00:18:35.128 } 00:18:35.128 } 00:18:35.128 Got JSON-RPC error response 00:18:35.128 GoRPCClient: error on JSON-RPC call 00:18:35.128 13:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.387 [2024-11-06 13:48:16.253391] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.387 [2024-11-06 13:48:16.253451] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:35.387 2024/11/06 13:48:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:18:35.387 request: 00:18:35.387 { 00:18:35.387 "method": "bdev_nvme_attach_controller", 00:18:35.387 "params": { 00:18:35.387 "name": "TLSTEST", 00:18:35.387 "trtype": "tcp", 00:18:35.387 "traddr": "10.0.0.3", 00:18:35.387 "adrfam": "ipv4", 00:18:35.387 "trsvcid": "4420", 00:18:35.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.387 "prchk_reftag": false, 00:18:35.387 "prchk_guard": false, 00:18:35.387 "hdgst": false, 00:18:35.387 "ddgst": false, 00:18:35.387 "psk": "key0", 00:18:35.387 "allow_unrecognized_csi": false 00:18:35.387 } 00:18:35.387 } 00:18:35.387 Got JSON-RPC error response 00:18:35.387 GoRPCClient: error on JSON-RPC call 00:18:35.387 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83522 00:18:35.387 13:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83522 ']' 00:18:35.387 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83522 00:18:35.387 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:35.388 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.388 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83522 00:18:35.647 killing process with pid 83522 00:18:35.647 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.647 00:18:35.647 Latency(us) 00:18:35.647 [2024-11-06T13:48:16.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.647 [2024-11-06T13:48:16.580Z] =================================================================================================================== 00:18:35.647 [2024-11-06T13:48:16.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83522' 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83522 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83522 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82838 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 82838 ']' 00:18:35.647 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 82838 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82838 00:18:35.907 killing process with pid 82838 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82838' 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 82838 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 82838 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:35.907 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pyYf7ozTxW 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pyYf7ozTxW 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83590 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83590 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83590 ']' 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.166 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.167 [2024-11-06 13:48:16.919566] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
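The key_long value generated above is the configured key in NVMe TLS PSK interchange form. A sketch of what the format_key python step (nvmf/common.sh@733) computes, reconstructed from the inputs and output visible in this log rather than copied from the SPDK source: base64 over the key bytes plus a little-endian CRC-32, wrapped in the NVMeTLSkey-1:<hash>: prefix, where hash id 02 selects SHA-384:

key=00112233445566778899aabbccddeeff0011223344556677
python - <<EOF
import base64, zlib
key = b"$key"                                # 48 key bytes, as required for hash id 02
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended before base64 encoding
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF
# prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: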
00:18:36.167 [2024-11-06 13:48:16.919632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.167 [2024-11-06 13:48:17.073075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.425 [2024-11-06 13:48:17.114998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.425 [2024-11-06 13:48:17.115042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.425 [2024-11-06 13:48:17.115052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.425 [2024-11-06 13:48:17.115061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.425 [2024-11-06 13:48:17.115068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.425 [2024-11-06 13:48:17.115349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pyYf7ozTxW 00:18:36.994 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.253 [2024-11-06 13:48:18.052385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.253 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.512 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:37.771 [2024-11-06 13:48:18.491929] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.771 [2024-11-06 13:48:18.492133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.771 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.030 malloc0 00:18:38.030 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.030 13:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:18:38.290 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pyYf7ozTxW 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pyYf7ozTxW 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83695 00:18:38.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83695 /var/tmp/bdevperf.sock 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83695 ']' 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:38.553 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.553 [2024-11-06 13:48:19.413711] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
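Collected from the setup_nvmf_tgt trace above, the complete target-side TLS configuration comes down to seven RPCs; a flat sketch with the same arguments as logged (readiness waits omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0            # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW    # the 0600 key file written above
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0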
00:18:38.553 [2024-11-06 13:48:19.414393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83695 ]
00:18:38.816 [2024-11-06 13:48:19.565749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:38.816 [2024-11-06 13:48:19.637846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:39.383 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:39.383 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:18:39.383 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW
00:18:39.643 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:39.902 [2024-11-06 13:48:20.703349] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:39.902 TLSTESTn1
00:18:40.160 13:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:40.160 Running I/O for 10 seconds...
00:18:42.038 5380.00 IOPS, 21.02 MiB/s [2024-11-06T13:48:23.911Z] 5352.00 IOPS, 20.91 MiB/s [2024-11-06T13:48:25.290Z] 5283.67 IOPS, 20.64 MiB/s [2024-11-06T13:48:26.227Z] 5427.25 IOPS, 21.20 MiB/s [2024-11-06T13:48:27.167Z] 5507.20 IOPS, 21.51 MiB/s [2024-11-06T13:48:28.170Z] 5553.00 IOPS, 21.69 MiB/s [2024-11-06T13:48:29.122Z] 5596.14 IOPS, 21.86 MiB/s [2024-11-06T13:48:30.060Z] 5627.62 IOPS, 21.98 MiB/s [2024-11-06T13:48:30.999Z] 5661.00 IOPS, 22.11 MiB/s [2024-11-06T13:48:30.999Z] 5682.80 IOPS, 22.20 MiB/s
00:18:50.066 Latency(us)
00:18:50.066 [2024-11-06T13:48:30.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:50.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:50.066 Verification LBA range: start 0x0 length 0x2000
00:18:50.066 TLSTESTn1 : 10.01 5689.04 22.22 0.00 0.00 22465.50 4263.79 27583.02
00:18:50.066 [2024-11-06T13:48:30.999Z] ===================================================================================================================
00:18:50.066 [2024-11-06T13:48:30.999Z] Total : 5689.04 22.22 0.00 0.00 22465.50 4263.79 27583.02
00:18:50.066 {
00:18:50.066 "results": [
00:18:50.066 {
00:18:50.066 "job": "TLSTESTn1",
00:18:50.066 "core_mask": "0x4",
00:18:50.066 "workload": "verify",
00:18:50.066 "status": "finished",
00:18:50.066 "verify_range": {
00:18:50.066 "start": 0,
00:18:50.066 "length": 8192
00:18:50.066 },
00:18:50.066 "queue_depth": 128,
00:18:50.066 "io_size": 4096,
00:18:50.066 "runtime": 10.01135,
00:18:50.066 "iops": 5689.042936267337,
00:18:50.066 "mibps": 22.222823969794284,
00:18:50.066 "io_failed": 0,
00:18:50.066 "io_timeout": 0,
00:18:50.066 "avg_latency_us": 22465.49783916634,
00:18:50.066 "min_latency_us": 4263.787951807229,
00:18:50.066 "max_latency_us": 27583.02329317269
00:18:50.066 }
00:18:50.066 ],
00:18:50.066 "core_count": 1
00:18:50.066 }
00:18:50.066 13:48:30
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83695 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83695 ']' 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83695 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83695 00:18:50.066 killing process with pid 83695 00:18:50.066 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.066 00:18:50.066 Latency(us) 00:18:50.066 [2024-11-06T13:48:30.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.066 [2024-11-06T13:48:30.999Z] =================================================================================================================== 00:18:50.066 [2024-11-06T13:48:30.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83695' 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83695 00:18:50.066 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83695 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pyYf7ozTxW 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pyYf7ozTxW 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pyYf7ozTxW 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pyYf7ozTxW 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.pyYf7ozTxW 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83859 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83859 /var/tmp/bdevperf.sock 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83859 ']' 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:50.327 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.587 [2024-11-06 13:48:31.291541] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:18:50.587 [2024-11-06 13:48:31.291772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83859 ] 00:18:50.587 [2024-11-06 13:48:31.425677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.587 [2024-11-06 13:48:31.498005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.534 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.534 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:51.534 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:18:51.534 [2024-11-06 13:48:32.384209] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pyYf7ozTxW': 0100666 00:18:51.534 [2024-11-06 13:48:32.384457] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:51.534 2024/11/06 13:48:32 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.pyYf7ozTxW], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:51.534 request: 00:18:51.534 { 00:18:51.534 "method": "keyring_file_add_key", 00:18:51.534 "params": { 00:18:51.534 "name": "key0", 00:18:51.534 "path": "/tmp/tmp.pyYf7ozTxW" 00:18:51.534 } 00:18:51.534 } 00:18:51.534 Got JSON-RPC error response 00:18:51.534 GoRPCClient: error on JSON-RPC call 00:18:51.534 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.794 [2024-11-06 13:48:32.592049] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.794 [2024-11-06 13:48:32.592105] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:51.794 2024/11/06 13:48:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:18:51.794 request: 00:18:51.794 { 00:18:51.794 "method": "bdev_nvme_attach_controller", 00:18:51.794 "params": { 00:18:51.794 "name": "TLSTEST", 00:18:51.794 "trtype": "tcp", 00:18:51.794 "traddr": "10.0.0.3", 00:18:51.794 "adrfam": "ipv4", 00:18:51.794 "trsvcid": "4420", 00:18:51.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.794 "prchk_reftag": false, 00:18:51.794 "prchk_guard": false, 00:18:51.794 "hdgst": false, 00:18:51.794 "ddgst": false, 00:18:51.794 "psk": "key0", 00:18:51.794 "allow_unrecognized_csi": false 00:18:51.794 } 00:18:51.794 } 00:18:51.794 Got JSON-RPC error response 00:18:51.794 GoRPCClient: error on JSON-RPC call 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83859 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83859 ']' 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83859 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83859 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:51.794 killing process with pid 83859 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83859' 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83859 00:18:51.794 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.794 00:18:51.794 Latency(us) 00:18:51.794 [2024-11-06T13:48:32.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.794 [2024-11-06T13:48:32.727Z] =================================================================================================================== 00:18:51.794 [2024-11-06T13:48:32.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.794 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83859 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
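The failure above is the point of this stage: target/tls.sh deliberately loosens the PSK file to 0666, SPDK's file-based keyring (keyring_file_check_path) refuses any key file readable by group or others, so keyring_file_add_key returns Code=-1 and the dependent bdev_nvme_attach_controller fails with "Required key not available". A minimal sketch of that negative check, using the key name and paths from this run:

    # Make the PSK world-readable; the keyring must now reject it.
    chmod 0666 /tmp/tmp.pyYf7ozTxW
    # Fails with "Operation not permitted" because the mode is 0100666.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW \
        || echo 'key rejected as expected'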
00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83590 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83590 ']' 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83590 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83590 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:52.054 killing process with pid 83590 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83590' 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83590 00:18:52.054 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83590 00:18:52.313 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:52.313 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.313 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.313 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83922 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83922 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83922 ']' 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.574 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.574 [2024-11-06 13:48:33.309922] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
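The harness now brings up a fresh nvmf target for the server-side variant of the same permission test and blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait pattern, with the namespace and flags from this run (the polling RPC shown is illustrative; the harness's waitforlisten implements its own retry loop):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll until the app serves RPCs on its default UNIX socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done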
00:18:52.574 [2024-11-06 13:48:33.310003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.574 [2024-11-06 13:48:33.462549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.834 [2024-11-06 13:48:33.527918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.834 [2024-11-06 13:48:33.527977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.834 [2024-11-06 13:48:33.527987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.834 [2024-11-06 13:48:33.527996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.834 [2024-11-06 13:48:33.528003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.834 [2024-11-06 13:48:33.528395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.404 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.404 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:53.404 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pyYf7ozTxW 00:18:53.405 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.665 [2024-11-06 13:48:34.455607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.665 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:53.925 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:54.186 [2024-11-06 13:48:34.871008] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.186 [2024-11-06 13:48:34.871259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:54.186 13:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:54.186 malloc0 00:18:54.186 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.445 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:18:54.704 [2024-11-06 13:48:35.528709] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pyYf7ozTxW': 0100666 00:18:54.704 [2024-11-06 13:48:35.528753] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:54.704 2024/11/06 13:48:35 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.pyYf7ozTxW], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:18:54.704 request: 00:18:54.704 { 00:18:54.704 "method": "keyring_file_add_key", 00:18:54.704 "params": { 00:18:54.704 "name": "key0", 00:18:54.704 "path": "/tmp/tmp.pyYf7ozTxW" 00:18:54.704 } 00:18:54.704 } 00:18:54.704 Got JSON-RPC error response 00:18:54.704 GoRPCClient: error on JSON-RPC call 00:18:54.704 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.963 [2024-11-06 13:48:35.736416] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:54.963 [2024-11-06 13:48:35.736472] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:54.963 2024/11/06 13:48:35 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:54.963 request: 00:18:54.963 { 00:18:54.963 "method": "nvmf_subsystem_add_host", 00:18:54.963 "params": { 00:18:54.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.963 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.963 "psk": "key0" 00:18:54.963 } 00:18:54.963 } 00:18:54.963 Got JSON-RPC error response 00:18:54.963 GoRPCClient: error on JSON-RPC call 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83922 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83922 ']' 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 83922 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83922 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:54.963 killing process with pid 83922 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83922' 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83922 00:18:54.963 13:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83922 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pyYf7ozTxW 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84041 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84041 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84041 ']' 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.223 13:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.223 [2024-11-06 13:48:36.149698] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:18:55.223 [2024-11-06 13:48:36.149769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.482 [2024-11-06 13:48:36.283967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.482 [2024-11-06 13:48:36.344327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
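With the PSK file back at 0600 and a fresh target (pid 84041) coming up, the setup_nvmf_tgt sequence that follows succeeds end to end. Collected in one place, the RPC sequence it drives — every value below is taken verbatim from this run:

    chmod 0600 /tmp/tmp.pyYf7ozTxW
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener is what requires a TLS-secured channel (it surfaces later in save_config as "secure_channel": true); everything else is the standard TCP target bring-up.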
00:18:55.482 [2024-11-06 13:48:36.344379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.482 [2024-11-06 13:48:36.344389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.482 [2024-11-06 13:48:36.344398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.482 [2024-11-06 13:48:36.344404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.482 [2024-11-06 13:48:36.344753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pyYf7ozTxW 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.420 [2024-11-06 13:48:37.286777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.420 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:56.679 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:56.938 [2024-11-06 13:48:37.694173] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.938 [2024-11-06 13:48:37.694413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.938 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.197 malloc0 00:18:57.197 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.457 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:18:57.457 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84145 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84145 /var/tmp/bdevperf.sock 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84145 ']' 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.831 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.831 [2024-11-06 13:48:38.626702] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:18:57.831 [2024-11-06 13:48:38.627008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84145 ] 00:18:58.091 [2024-11-06 13:48:38.780557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.091 [2024-11-06 13:48:38.838782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.658 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.658 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:58.658 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:18:58.917 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.177 [2024-11-06 13:48:39.876233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.177 TLSTESTn1 00:18:59.177 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:59.437 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:59.437 "subsystems": [ 00:18:59.437 { 00:18:59.437 "subsystem": "keyring", 00:18:59.437 "config": [ 00:18:59.437 { 00:18:59.437 "method": "keyring_file_add_key", 00:18:59.437 "params": { 00:18:59.437 "name": "key0", 00:18:59.437 "path": "/tmp/tmp.pyYf7ozTxW" 00:18:59.437 } 00:18:59.437 } 00:18:59.437 ] 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "subsystem": "iobuf", 00:18:59.437 "config": [ 00:18:59.437 { 00:18:59.437 "method": "iobuf_set_options", 00:18:59.437 "params": { 00:18:59.437 "enable_numa": false, 00:18:59.437 "large_bufsize": 135168, 00:18:59.437 "large_pool_count": 1024, 00:18:59.437 
"small_bufsize": 8192, 00:18:59.437 "small_pool_count": 8192 00:18:59.437 } 00:18:59.437 } 00:18:59.437 ] 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "subsystem": "sock", 00:18:59.437 "config": [ 00:18:59.437 { 00:18:59.437 "method": "sock_set_default_impl", 00:18:59.437 "params": { 00:18:59.437 "impl_name": "posix" 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "sock_impl_set_options", 00:18:59.437 "params": { 00:18:59.437 "enable_ktls": false, 00:18:59.437 "enable_placement_id": 0, 00:18:59.437 "enable_quickack": false, 00:18:59.437 "enable_recv_pipe": true, 00:18:59.437 "enable_zerocopy_send_client": false, 00:18:59.437 "enable_zerocopy_send_server": true, 00:18:59.437 "impl_name": "ssl", 00:18:59.437 "recv_buf_size": 4096, 00:18:59.437 "send_buf_size": 4096, 00:18:59.437 "tls_version": 0, 00:18:59.437 "zerocopy_threshold": 0 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "sock_impl_set_options", 00:18:59.437 "params": { 00:18:59.437 "enable_ktls": false, 00:18:59.437 "enable_placement_id": 0, 00:18:59.437 "enable_quickack": false, 00:18:59.437 "enable_recv_pipe": true, 00:18:59.437 "enable_zerocopy_send_client": false, 00:18:59.437 "enable_zerocopy_send_server": true, 00:18:59.437 "impl_name": "posix", 00:18:59.437 "recv_buf_size": 2097152, 00:18:59.437 "send_buf_size": 2097152, 00:18:59.437 "tls_version": 0, 00:18:59.437 "zerocopy_threshold": 0 00:18:59.437 } 00:18:59.437 } 00:18:59.437 ] 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "subsystem": "vmd", 00:18:59.437 "config": [] 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "subsystem": "accel", 00:18:59.437 "config": [ 00:18:59.437 { 00:18:59.437 "method": "accel_set_options", 00:18:59.437 "params": { 00:18:59.437 "buf_count": 2048, 00:18:59.437 "large_cache_size": 16, 00:18:59.437 "sequence_count": 2048, 00:18:59.437 "small_cache_size": 128, 00:18:59.437 "task_count": 2048 00:18:59.437 } 00:18:59.437 } 00:18:59.437 ] 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "subsystem": "bdev", 00:18:59.437 "config": [ 00:18:59.437 { 00:18:59.437 "method": "bdev_set_options", 00:18:59.437 "params": { 00:18:59.437 "bdev_auto_examine": true, 00:18:59.437 "bdev_io_cache_size": 256, 00:18:59.437 "bdev_io_pool_size": 65535, 00:18:59.437 "iobuf_large_cache_size": 16, 00:18:59.437 "iobuf_small_cache_size": 128 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "bdev_raid_set_options", 00:18:59.437 "params": { 00:18:59.437 "process_max_bandwidth_mb_sec": 0, 00:18:59.437 "process_window_size_kb": 1024 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "bdev_iscsi_set_options", 00:18:59.437 "params": { 00:18:59.437 "timeout_sec": 30 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "bdev_nvme_set_options", 00:18:59.437 "params": { 00:18:59.437 "action_on_timeout": "none", 00:18:59.437 "allow_accel_sequence": false, 00:18:59.437 "arbitration_burst": 0, 00:18:59.437 "bdev_retry_count": 3, 00:18:59.437 "ctrlr_loss_timeout_sec": 0, 00:18:59.437 "delay_cmd_submit": true, 00:18:59.437 "dhchap_dhgroups": [ 00:18:59.437 "null", 00:18:59.437 "ffdhe2048", 00:18:59.437 "ffdhe3072", 00:18:59.437 "ffdhe4096", 00:18:59.437 "ffdhe6144", 00:18:59.437 "ffdhe8192" 00:18:59.437 ], 00:18:59.437 "dhchap_digests": [ 00:18:59.437 "sha256", 00:18:59.437 "sha384", 00:18:59.437 "sha512" 00:18:59.437 ], 00:18:59.437 "disable_auto_failback": false, 00:18:59.437 "fast_io_fail_timeout_sec": 0, 00:18:59.437 "generate_uuids": false, 00:18:59.437 "high_priority_weight": 0, 00:18:59.437 
"io_path_stat": false, 00:18:59.437 "io_queue_requests": 0, 00:18:59.437 "keep_alive_timeout_ms": 10000, 00:18:59.437 "low_priority_weight": 0, 00:18:59.437 "medium_priority_weight": 0, 00:18:59.437 "nvme_adminq_poll_period_us": 10000, 00:18:59.437 "nvme_error_stat": false, 00:18:59.437 "nvme_ioq_poll_period_us": 0, 00:18:59.437 "rdma_cm_event_timeout_ms": 0, 00:18:59.437 "rdma_max_cq_size": 0, 00:18:59.437 "rdma_srq_size": 0, 00:18:59.437 "reconnect_delay_sec": 0, 00:18:59.437 "timeout_admin_us": 0, 00:18:59.437 "timeout_us": 0, 00:18:59.437 "transport_ack_timeout": 0, 00:18:59.437 "transport_retry_count": 4, 00:18:59.437 "transport_tos": 0 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "bdev_nvme_set_hotplug", 00:18:59.437 "params": { 00:18:59.437 "enable": false, 00:18:59.437 "period_us": 100000 00:18:59.437 } 00:18:59.437 }, 00:18:59.437 { 00:18:59.437 "method": "bdev_malloc_create", 00:18:59.437 "params": { 00:18:59.437 "block_size": 4096, 00:18:59.437 "dif_is_head_of_md": false, 00:18:59.437 "dif_pi_format": 0, 00:18:59.437 "dif_type": 0, 00:18:59.437 "md_size": 0, 00:18:59.438 "name": "malloc0", 00:18:59.438 "num_blocks": 8192, 00:18:59.438 "optimal_io_boundary": 0, 00:18:59.438 "physical_block_size": 4096, 00:18:59.438 "uuid": "57e3256a-0eed-449d-b007-07dfa655e73c" 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "bdev_wait_for_examine" 00:18:59.438 } 00:18:59.438 ] 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "subsystem": "nbd", 00:18:59.438 "config": [] 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "subsystem": "scheduler", 00:18:59.438 "config": [ 00:18:59.438 { 00:18:59.438 "method": "framework_set_scheduler", 00:18:59.438 "params": { 00:18:59.438 "name": "static" 00:18:59.438 } 00:18:59.438 } 00:18:59.438 ] 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "subsystem": "nvmf", 00:18:59.438 "config": [ 00:18:59.438 { 00:18:59.438 "method": "nvmf_set_config", 00:18:59.438 "params": { 00:18:59.438 "admin_cmd_passthru": { 00:18:59.438 "identify_ctrlr": false 00:18:59.438 }, 00:18:59.438 "dhchap_dhgroups": [ 00:18:59.438 "null", 00:18:59.438 "ffdhe2048", 00:18:59.438 "ffdhe3072", 00:18:59.438 "ffdhe4096", 00:18:59.438 "ffdhe6144", 00:18:59.438 "ffdhe8192" 00:18:59.438 ], 00:18:59.438 "dhchap_digests": [ 00:18:59.438 "sha256", 00:18:59.438 "sha384", 00:18:59.438 "sha512" 00:18:59.438 ], 00:18:59.438 "discovery_filter": "match_any" 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_set_max_subsystems", 00:18:59.438 "params": { 00:18:59.438 "max_subsystems": 1024 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_set_crdt", 00:18:59.438 "params": { 00:18:59.438 "crdt1": 0, 00:18:59.438 "crdt2": 0, 00:18:59.438 "crdt3": 0 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_create_transport", 00:18:59.438 "params": { 00:18:59.438 "abort_timeout_sec": 1, 00:18:59.438 "ack_timeout": 0, 00:18:59.438 "buf_cache_size": 4294967295, 00:18:59.438 "c2h_success": false, 00:18:59.438 "data_wr_pool_size": 0, 00:18:59.438 "dif_insert_or_strip": false, 00:18:59.438 "in_capsule_data_size": 4096, 00:18:59.438 "io_unit_size": 131072, 00:18:59.438 "max_aq_depth": 128, 00:18:59.438 "max_io_qpairs_per_ctrlr": 127, 00:18:59.438 "max_io_size": 131072, 00:18:59.438 "max_queue_depth": 128, 00:18:59.438 "num_shared_buffers": 511, 00:18:59.438 "sock_priority": 0, 00:18:59.438 "trtype": "TCP", 00:18:59.438 "zcopy": false 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": 
"nvmf_create_subsystem", 00:18:59.438 "params": { 00:18:59.438 "allow_any_host": false, 00:18:59.438 "ana_reporting": false, 00:18:59.438 "max_cntlid": 65519, 00:18:59.438 "max_namespaces": 10, 00:18:59.438 "min_cntlid": 1, 00:18:59.438 "model_number": "SPDK bdev Controller", 00:18:59.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.438 "serial_number": "SPDK00000000000001" 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_subsystem_add_host", 00:18:59.438 "params": { 00:18:59.438 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.438 "psk": "key0" 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_subsystem_add_ns", 00:18:59.438 "params": { 00:18:59.438 "namespace": { 00:18:59.438 "bdev_name": "malloc0", 00:18:59.438 "nguid": "57E3256A0EED449DB00707DFA655E73C", 00:18:59.438 "no_auto_visible": false, 00:18:59.438 "nsid": 1, 00:18:59.438 "uuid": "57e3256a-0eed-449d-b007-07dfa655e73c" 00:18:59.438 }, 00:18:59.438 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:59.438 } 00:18:59.438 }, 00:18:59.438 { 00:18:59.438 "method": "nvmf_subsystem_add_listener", 00:18:59.438 "params": { 00:18:59.438 "listen_address": { 00:18:59.438 "adrfam": "IPv4", 00:18:59.438 "traddr": "10.0.0.3", 00:18:59.438 "trsvcid": "4420", 00:18:59.438 "trtype": "TCP" 00:18:59.438 }, 00:18:59.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.438 "secure_channel": true 00:18:59.438 } 00:18:59.438 } 00:18:59.438 ] 00:18:59.438 } 00:18:59.438 ] 00:18:59.438 }' 00:18:59.438 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:59.698 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:59.698 "subsystems": [ 00:18:59.698 { 00:18:59.698 "subsystem": "keyring", 00:18:59.698 "config": [ 00:18:59.698 { 00:18:59.698 "method": "keyring_file_add_key", 00:18:59.698 "params": { 00:18:59.698 "name": "key0", 00:18:59.698 "path": "/tmp/tmp.pyYf7ozTxW" 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ] 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "subsystem": "iobuf", 00:18:59.698 "config": [ 00:18:59.698 { 00:18:59.698 "method": "iobuf_set_options", 00:18:59.698 "params": { 00:18:59.698 "enable_numa": false, 00:18:59.698 "large_bufsize": 135168, 00:18:59.698 "large_pool_count": 1024, 00:18:59.698 "small_bufsize": 8192, 00:18:59.698 "small_pool_count": 8192 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ] 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "subsystem": "sock", 00:18:59.698 "config": [ 00:18:59.698 { 00:18:59.698 "method": "sock_set_default_impl", 00:18:59.698 "params": { 00:18:59.698 "impl_name": "posix" 00:18:59.698 } 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "method": "sock_impl_set_options", 00:18:59.698 "params": { 00:18:59.698 "enable_ktls": false, 00:18:59.698 "enable_placement_id": 0, 00:18:59.698 "enable_quickack": false, 00:18:59.698 "enable_recv_pipe": true, 00:18:59.698 "enable_zerocopy_send_client": false, 00:18:59.698 "enable_zerocopy_send_server": true, 00:18:59.698 "impl_name": "ssl", 00:18:59.698 "recv_buf_size": 4096, 00:18:59.698 "send_buf_size": 4096, 00:18:59.698 "tls_version": 0, 00:18:59.698 "zerocopy_threshold": 0 00:18:59.698 } 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "method": "sock_impl_set_options", 00:18:59.698 "params": { 00:18:59.698 "enable_ktls": false, 00:18:59.698 "enable_placement_id": 0, 00:18:59.698 "enable_quickack": false, 00:18:59.698 "enable_recv_pipe": true, 
00:18:59.698 "enable_zerocopy_send_client": false, 00:18:59.698 "enable_zerocopy_send_server": true, 00:18:59.698 "impl_name": "posix", 00:18:59.698 "recv_buf_size": 2097152, 00:18:59.698 "send_buf_size": 2097152, 00:18:59.698 "tls_version": 0, 00:18:59.698 "zerocopy_threshold": 0 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ] 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "subsystem": "vmd", 00:18:59.698 "config": [] 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "subsystem": "accel", 00:18:59.698 "config": [ 00:18:59.698 { 00:18:59.698 "method": "accel_set_options", 00:18:59.698 "params": { 00:18:59.698 "buf_count": 2048, 00:18:59.698 "large_cache_size": 16, 00:18:59.698 "sequence_count": 2048, 00:18:59.698 "small_cache_size": 128, 00:18:59.698 "task_count": 2048 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ] 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "subsystem": "bdev", 00:18:59.698 "config": [ 00:18:59.698 { 00:18:59.698 "method": "bdev_set_options", 00:18:59.698 "params": { 00:18:59.698 "bdev_auto_examine": true, 00:18:59.698 "bdev_io_cache_size": 256, 00:18:59.698 "bdev_io_pool_size": 65535, 00:18:59.698 "iobuf_large_cache_size": 16, 00:18:59.698 "iobuf_small_cache_size": 128 00:18:59.698 } 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "method": "bdev_raid_set_options", 00:18:59.698 "params": { 00:18:59.698 "process_max_bandwidth_mb_sec": 0, 00:18:59.698 "process_window_size_kb": 1024 00:18:59.698 } 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "method": "bdev_iscsi_set_options", 00:18:59.698 "params": { 00:18:59.698 "timeout_sec": 30 00:18:59.698 } 00:18:59.698 }, 00:18:59.698 { 00:18:59.698 "method": "bdev_nvme_set_options", 00:18:59.698 "params": { 00:18:59.698 "action_on_timeout": "none", 00:18:59.698 "allow_accel_sequence": false, 00:18:59.698 "arbitration_burst": 0, 00:18:59.698 "bdev_retry_count": 3, 00:18:59.698 "ctrlr_loss_timeout_sec": 0, 00:18:59.698 "delay_cmd_submit": true, 00:18:59.698 "dhchap_dhgroups": [ 00:18:59.698 "null", 00:18:59.698 "ffdhe2048", 00:18:59.698 "ffdhe3072", 00:18:59.698 "ffdhe4096", 00:18:59.698 "ffdhe6144", 00:18:59.698 "ffdhe8192" 00:18:59.698 ], 00:18:59.698 "dhchap_digests": [ 00:18:59.698 "sha256", 00:18:59.698 "sha384", 00:18:59.698 "sha512" 00:18:59.698 ], 00:18:59.698 "disable_auto_failback": false, 00:18:59.698 "fast_io_fail_timeout_sec": 0, 00:18:59.698 "generate_uuids": false, 00:18:59.698 "high_priority_weight": 0, 00:18:59.698 "io_path_stat": false, 00:18:59.698 "io_queue_requests": 512, 00:18:59.698 "keep_alive_timeout_ms": 10000, 00:18:59.698 "low_priority_weight": 0, 00:18:59.698 "medium_priority_weight": 0, 00:18:59.698 "nvme_adminq_poll_period_us": 10000, 00:18:59.698 "nvme_error_stat": false, 00:18:59.698 "nvme_ioq_poll_period_us": 0, 00:18:59.698 "rdma_cm_event_timeout_ms": 0, 00:18:59.698 "rdma_max_cq_size": 0, 00:18:59.698 "rdma_srq_size": 0, 00:18:59.698 "reconnect_delay_sec": 0, 00:18:59.698 "timeout_admin_us": 0, 00:18:59.698 "timeout_us": 0, 00:18:59.698 "transport_ack_timeout": 0, 00:18:59.699 "transport_retry_count": 4, 00:18:59.699 "transport_tos": 0 00:18:59.699 } 00:18:59.699 }, 00:18:59.699 { 00:18:59.699 "method": "bdev_nvme_attach_controller", 00:18:59.699 "params": { 00:18:59.699 "adrfam": "IPv4", 00:18:59.699 "ctrlr_loss_timeout_sec": 0, 00:18:59.699 "ddgst": false, 00:18:59.699 "fast_io_fail_timeout_sec": 0, 00:18:59.699 "hdgst": false, 00:18:59.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.699 "multipath": "multipath", 00:18:59.699 "name": "TLSTEST", 00:18:59.699 "prchk_guard": false, 00:18:59.699 "prchk_reftag": 
false, 00:18:59.699 "psk": "key0", 00:18:59.699 "reconnect_delay_sec": 0, 00:18:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.699 "traddr": "10.0.0.3", 00:18:59.699 "trsvcid": "4420", 00:18:59.699 "trtype": "TCP" 00:18:59.699 } 00:18:59.699 }, 00:18:59.699 { 00:18:59.699 "method": "bdev_nvme_set_hotplug", 00:18:59.699 "params": { 00:18:59.699 "enable": false, 00:18:59.699 "period_us": 100000 00:18:59.699 } 00:18:59.699 }, 00:18:59.699 { 00:18:59.699 "method": "bdev_wait_for_examine" 00:18:59.699 } 00:18:59.699 ] 00:18:59.699 }, 00:18:59.699 { 00:18:59.699 "subsystem": "nbd", 00:18:59.699 "config": [] 00:18:59.699 } 00:18:59.699 ] 00:18:59.699 }' 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84145 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84145 ']' 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84145 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84145 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:59.699 killing process with pid 84145 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84145' 00:18:59.699 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.699 00:18:59.699 Latency(us) 00:18:59.699 [2024-11-06T13:48:40.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.699 [2024-11-06T13:48:40.632Z] =================================================================================================================== 00:18:59.699 [2024-11-06T13:48:40.632Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84145 00:18:59.699 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84145 00:18:59.958 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84041 00:18:59.958 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84041 ']' 00:18:59.958 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84041 00:18:59.958 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:00.217 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.217 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84041 00:19:00.217 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:00.217 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:00.218 killing process with pid 84041 00:19:00.218 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84041' 00:19:00.218 
13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84041 00:19:00.218 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84041 00:19:00.477 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:00.477 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.477 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.477 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.477 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:00.477 "subsystems": [ 00:19:00.477 { 00:19:00.477 "subsystem": "keyring", 00:19:00.477 "config": [ 00:19:00.477 { 00:19:00.477 "method": "keyring_file_add_key", 00:19:00.477 "params": { 00:19:00.477 "name": "key0", 00:19:00.477 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:00.477 } 00:19:00.477 } 00:19:00.477 ] 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "subsystem": "iobuf", 00:19:00.477 "config": [ 00:19:00.477 { 00:19:00.477 "method": "iobuf_set_options", 00:19:00.477 "params": { 00:19:00.477 "enable_numa": false, 00:19:00.477 "large_bufsize": 135168, 00:19:00.477 "large_pool_count": 1024, 00:19:00.477 "small_bufsize": 8192, 00:19:00.477 "small_pool_count": 8192 00:19:00.477 } 00:19:00.477 } 00:19:00.477 ] 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "subsystem": "sock", 00:19:00.477 "config": [ 00:19:00.477 { 00:19:00.477 "method": "sock_set_default_impl", 00:19:00.477 "params": { 00:19:00.477 "impl_name": "posix" 00:19:00.477 } 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "method": "sock_impl_set_options", 00:19:00.477 "params": { 00:19:00.477 "enable_ktls": false, 00:19:00.477 "enable_placement_id": 0, 00:19:00.477 "enable_quickack": false, 00:19:00.477 "enable_recv_pipe": true, 00:19:00.477 "enable_zerocopy_send_client": false, 00:19:00.477 "enable_zerocopy_send_server": true, 00:19:00.477 "impl_name": "ssl", 00:19:00.477 "recv_buf_size": 4096, 00:19:00.477 "send_buf_size": 4096, 00:19:00.477 "tls_version": 0, 00:19:00.477 "zerocopy_threshold": 0 00:19:00.477 } 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "method": "sock_impl_set_options", 00:19:00.477 "params": { 00:19:00.477 "enable_ktls": false, 00:19:00.477 "enable_placement_id": 0, 00:19:00.477 "enable_quickack": false, 00:19:00.477 "enable_recv_pipe": true, 00:19:00.477 "enable_zerocopy_send_client": false, 00:19:00.477 "enable_zerocopy_send_server": true, 00:19:00.477 "impl_name": "posix", 00:19:00.477 "recv_buf_size": 2097152, 00:19:00.477 "send_buf_size": 2097152, 00:19:00.477 "tls_version": 0, 00:19:00.477 "zerocopy_threshold": 0 00:19:00.477 } 00:19:00.477 } 00:19:00.477 ] 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "subsystem": "vmd", 00:19:00.477 "config": [] 00:19:00.477 }, 00:19:00.477 { 00:19:00.477 "subsystem": "accel", 00:19:00.477 "config": [ 00:19:00.477 { 00:19:00.478 "method": "accel_set_options", 00:19:00.478 "params": { 00:19:00.478 "buf_count": 2048, 00:19:00.478 "large_cache_size": 16, 00:19:00.478 "sequence_count": 2048, 00:19:00.478 "small_cache_size": 128, 00:19:00.478 "task_count": 2048 00:19:00.478 } 00:19:00.478 } 00:19:00.478 ] 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "subsystem": "bdev", 00:19:00.478 "config": [ 00:19:00.478 { 00:19:00.478 "method": "bdev_set_options", 00:19:00.478 "params": { 00:19:00.478 "bdev_auto_examine": true, 00:19:00.478 "bdev_io_cache_size": 256, 
00:19:00.478 "bdev_io_pool_size": 65535, 00:19:00.478 "iobuf_large_cache_size": 16, 00:19:00.478 "iobuf_small_cache_size": 128 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_raid_set_options", 00:19:00.478 "params": { 00:19:00.478 "process_max_bandwidth_mb_sec": 0, 00:19:00.478 "process_window_size_kb": 1024 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_iscsi_set_options", 00:19:00.478 "params": { 00:19:00.478 "timeout_sec": 30 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_nvme_set_options", 00:19:00.478 "params": { 00:19:00.478 "action_on_timeout": "none", 00:19:00.478 "allow_accel_sequence": false, 00:19:00.478 "arbitration_burst": 0, 00:19:00.478 "bdev_retry_count": 3, 00:19:00.478 "ctrlr_loss_timeout_sec": 0, 00:19:00.478 "delay_cmd_submit": true, 00:19:00.478 "dhchap_dhgroups": [ 00:19:00.478 "null", 00:19:00.478 "ffdhe2048", 00:19:00.478 "ffdhe3072", 00:19:00.478 "ffdhe4096", 00:19:00.478 "ffdhe6144", 00:19:00.478 "ffdhe8192" 00:19:00.478 ], 00:19:00.478 "dhchap_digests": [ 00:19:00.478 "sha256", 00:19:00.478 "sha384", 00:19:00.478 "sha512" 00:19:00.478 ], 00:19:00.478 "disable_auto_failback": false, 00:19:00.478 "fast_io_fail_timeout_sec": 0, 00:19:00.478 "generate_uuids": false, 00:19:00.478 "high_priority_weight": 0, 00:19:00.478 "io_path_stat": false, 00:19:00.478 "io_queue_requests": 0, 00:19:00.478 "keep_alive_timeout_ms": 10000, 00:19:00.478 "low_priority_weight": 0, 00:19:00.478 "medium_priority_weight": 0, 00:19:00.478 "nvme_adminq_poll_period_us": 10000, 00:19:00.478 "nvme_error_stat": false, 00:19:00.478 "nvme_ioq_poll_period_us": 0, 00:19:00.478 "rdma_cm_event_timeout_ms": 0, 00:19:00.478 "rdma_max_cq_size": 0, 00:19:00.478 "rdma_srq_size": 0, 00:19:00.478 "reconnect_delay_sec": 0, 00:19:00.478 "timeout_admin_us": 0, 00:19:00.478 "timeout_us": 0, 00:19:00.478 "transport_ack_timeout": 0, 00:19:00.478 "transport_retry_count": 4, 00:19:00.478 "transport_tos": 0 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_nvme_set_hotplug", 00:19:00.478 "params": { 00:19:00.478 "enable": false, 00:19:00.478 "period_us": 100000 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_malloc_create", 00:19:00.478 "params": { 00:19:00.478 "block_size": 4096, 00:19:00.478 "dif_is_head_of_md": false, 00:19:00.478 "dif_pi_format": 0, 00:19:00.478 "dif_type": 0, 00:19:00.478 "md_size": 0, 00:19:00.478 "name": "malloc0", 00:19:00.478 "num_blocks": 8192, 00:19:00.478 "optimal_io_boundary": 0, 00:19:00.478 "physical_block_size": 4096, 00:19:00.478 "uuid": "57e3256a-0eed-449d-b007-07dfa655e73c" 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "bdev_wait_for_examine" 00:19:00.478 } 00:19:00.478 ] 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "subsystem": "nbd", 00:19:00.478 "config": [] 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "subsystem": "scheduler", 00:19:00.478 "config": [ 00:19:00.478 { 00:19:00.478 "method": "framework_set_scheduler", 00:19:00.478 "params": { 00:19:00.478 "name": "static" 00:19:00.478 } 00:19:00.478 } 00:19:00.478 ] 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "subsystem": "nvmf", 00:19:00.478 "config": [ 00:19:00.478 { 00:19:00.478 "method": "nvmf_set_config", 00:19:00.478 "params": { 00:19:00.478 "admin_cmd_passthru": { 00:19:00.478 "identify_ctrlr": false 00:19:00.478 }, 00:19:00.478 "dhchap_dhgroups": [ 00:19:00.478 "null", 00:19:00.478 "ffdhe2048", 00:19:00.478 "ffdhe3072", 00:19:00.478 "ffdhe4096", 00:19:00.478 "ffdhe6144", 
00:19:00.478 "ffdhe8192" 00:19:00.478 ], 00:19:00.478 "dhchap_digests": [ 00:19:00.478 "sha256", 00:19:00.478 "sha384", 00:19:00.478 "sha512" 00:19:00.478 ], 00:19:00.478 "discovery_filter": "match_any" 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_set_max_subsystems", 00:19:00.478 "params": { 00:19:00.478 "max_subsystems": 1024 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_set_crdt", 00:19:00.478 "params": { 00:19:00.478 "crdt1": 0, 00:19:00.478 "crdt2": 0, 00:19:00.478 "crdt3": 0 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_create_transport", 00:19:00.478 "params": { 00:19:00.478 "abort_timeout_sec": 1, 00:19:00.478 "ack_timeout": 0, 00:19:00.478 "buf_cache_size": 4294967295, 00:19:00.478 "c2h_success": false, 00:19:00.478 "data_wr_pool_size": 0, 00:19:00.478 "dif_insert_or_strip": false, 00:19:00.478 "in_capsule_data_size": 4096, 00:19:00.478 "io_unit_size": 131072, 00:19:00.478 "max_aq_depth": 128, 00:19:00.478 "max_io_qpairs_per_ctrlr": 127, 00:19:00.478 "max_io_size": 131072, 00:19:00.478 "max_queue_depth": 128, 00:19:00.478 "num_shared_buffers": 511, 00:19:00.478 "sock_priority": 0, 00:19:00.478 "trtype": "TCP", 00:19:00.478 "zcopy": false 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_create_subsystem", 00:19:00.478 "params": { 00:19:00.478 "allow_any_host": false, 00:19:00.478 "ana_reporting": false, 00:19:00.478 "max_cntlid": 65519, 00:19:00.478 "max_namespaces": 10, 00:19:00.478 "min_cntlid": 1, 00:19:00.478 "model_number": "SPDK bdev Controller", 00:19:00.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.478 "serial_number": "SPDK00000000000001" 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_subsystem_add_host", 00:19:00.478 "params": { 00:19:00.478 "host": "nqn.2016-06.io.spdk:host1", 00:19:00.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.478 "psk": "key0" 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_subsystem_add_ns", 00:19:00.478 "params": { 00:19:00.478 "namespace": { 00:19:00.478 "bdev_name": "malloc0", 00:19:00.478 "nguid": "57E3256A0EED449DB00707DFA655E73C", 00:19:00.478 "no_auto_visible": false, 00:19:00.478 "nsid": 1, 00:19:00.478 "uuid": "57e3256a-0eed-449d-b007-07dfa655e73c" 00:19:00.478 }, 00:19:00.478 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:00.478 } 00:19:00.478 }, 00:19:00.478 { 00:19:00.478 "method": "nvmf_subsystem_add_listener", 00:19:00.478 "params": { 00:19:00.478 "listen_address": { 00:19:00.478 "adrfam": "IPv4", 00:19:00.478 "traddr": "10.0.0.3", 00:19:00.478 "trsvcid": "4420", 00:19:00.478 "trtype": "TCP" 00:19:00.478 }, 00:19:00.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.478 "secure_channel": true 00:19:00.478 } 00:19:00.478 } 00:19:00.478 ] 00:19:00.478 } 00:19:00.478 ] 00:19:00.478 }' 00:19:00.478 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84235 00:19:00.478 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:00.478 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84235 00:19:00.478 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84235 ']' 00:19:00.478 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.479 13:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:00.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.479 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.479 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:00.479 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 [2024-11-06 13:48:41.274945] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:00.479 [2024-11-06 13:48:41.275037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.738 [2024-11-06 13:48:41.425650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.738 [2024-11-06 13:48:41.482224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.738 [2024-11-06 13:48:41.482276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.738 [2024-11-06 13:48:41.482286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.738 [2024-11-06 13:48:41.482294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.738 [2024-11-06 13:48:41.482301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
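This restart is the replay half of the save/restore check: the target JSON captured with save_config above is echoed into a fresh nvmf_tgt on fd 62 via -c /dev/fd/62, so the keyring, TLS listener, and subsystem state are rebuilt from config instead of live RPCs. Roughly, with an illustrative temp file standing in for the shell's echo:

    # /tmp/tgt.json is a stand-in for the $tgtconf string captured earlier.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt.json
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62< /tmp/tgt.json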
00:19:00.738 [2024-11-06 13:48:41.482748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.997 [2024-11-06 13:48:41.753871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.997 [2024-11-06 13:48:41.785762] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.997 [2024-11-06 13:48:41.786015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.256 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.256 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:01.256 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.256 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.256 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84277 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84277 /var/tmp/bdevperf.sock 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84277 ']' 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
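The initiator side is replayed the same way: the bdevperf configuration captured from /var/tmp/bdevperf.sock is fed back on fd 63, so key0 and the TLSTEST controller come from config rather than live RPCs. A sketch, again with a stand-in file:

    # /tmp/bdevperf.json stands in for the $bdevperfconf echoed on fd 63.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /dev/fd/63 63< /tmp/bdevperf.json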
00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.516 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:01.516 "subsystems": [ 00:19:01.516 { 00:19:01.516 "subsystem": "keyring", 00:19:01.516 "config": [ 00:19:01.516 { 00:19:01.516 "method": "keyring_file_add_key", 00:19:01.516 "params": { 00:19:01.516 "name": "key0", 00:19:01.516 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:01.516 } 00:19:01.516 } 00:19:01.516 ] 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "subsystem": "iobuf", 00:19:01.516 "config": [ 00:19:01.516 { 00:19:01.516 "method": "iobuf_set_options", 00:19:01.516 "params": { 00:19:01.516 "enable_numa": false, 00:19:01.516 "large_bufsize": 135168, 00:19:01.516 "large_pool_count": 1024, 00:19:01.516 "small_bufsize": 8192, 00:19:01.516 "small_pool_count": 8192 00:19:01.516 } 00:19:01.516 } 00:19:01.516 ] 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "subsystem": "sock", 00:19:01.516 "config": [ 00:19:01.516 { 00:19:01.516 "method": "sock_set_default_impl", 00:19:01.516 "params": { 00:19:01.516 "impl_name": "posix" 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "sock_impl_set_options", 00:19:01.516 "params": { 00:19:01.516 "enable_ktls": false, 00:19:01.516 "enable_placement_id": 0, 00:19:01.516 "enable_quickack": false, 00:19:01.516 "enable_recv_pipe": true, 00:19:01.516 "enable_zerocopy_send_client": false, 00:19:01.516 "enable_zerocopy_send_server": true, 00:19:01.516 "impl_name": "ssl", 00:19:01.516 "recv_buf_size": 4096, 00:19:01.516 "send_buf_size": 4096, 00:19:01.516 "tls_version": 0, 00:19:01.516 "zerocopy_threshold": 0 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "sock_impl_set_options", 00:19:01.516 "params": { 00:19:01.516 "enable_ktls": false, 00:19:01.516 "enable_placement_id": 0, 00:19:01.516 "enable_quickack": false, 00:19:01.516 "enable_recv_pipe": true, 00:19:01.516 "enable_zerocopy_send_client": false, 00:19:01.516 "enable_zerocopy_send_server": true, 00:19:01.516 "impl_name": "posix", 00:19:01.516 "recv_buf_size": 2097152, 00:19:01.516 "send_buf_size": 2097152, 00:19:01.516 "tls_version": 0, 00:19:01.516 "zerocopy_threshold": 0 00:19:01.516 } 00:19:01.516 } 00:19:01.516 ] 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "subsystem": "vmd", 00:19:01.516 "config": [] 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "subsystem": "accel", 00:19:01.516 "config": [ 00:19:01.516 { 00:19:01.516 "method": "accel_set_options", 00:19:01.516 "params": { 00:19:01.516 "buf_count": 2048, 00:19:01.516 "large_cache_size": 16, 00:19:01.516 "sequence_count": 2048, 00:19:01.516 "small_cache_size": 128, 00:19:01.516 "task_count": 2048 00:19:01.516 } 00:19:01.516 } 00:19:01.516 ] 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "subsystem": "bdev", 00:19:01.516 "config": [ 00:19:01.516 { 00:19:01.516 "method": "bdev_set_options", 00:19:01.516 "params": { 00:19:01.516 "bdev_auto_examine": true, 00:19:01.516 "bdev_io_cache_size": 256, 00:19:01.516 "bdev_io_pool_size": 65535, 00:19:01.516 "iobuf_large_cache_size": 16, 00:19:01.516 "iobuf_small_cache_size": 128 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "bdev_raid_set_options", 
00:19:01.516 "params": { 00:19:01.516 "process_max_bandwidth_mb_sec": 0, 00:19:01.516 "process_window_size_kb": 1024 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "bdev_iscsi_set_options", 00:19:01.516 "params": { 00:19:01.516 "timeout_sec": 30 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "bdev_nvme_set_options", 00:19:01.516 "params": { 00:19:01.516 "action_on_timeout": "none", 00:19:01.516 "allow_accel_sequence": false, 00:19:01.516 "arbitration_burst": 0, 00:19:01.516 "bdev_retry_count": 3, 00:19:01.516 "ctrlr_loss_timeout_sec": 0, 00:19:01.516 "delay_cmd_submit": true, 00:19:01.516 "dhchap_dhgroups": [ 00:19:01.516 "null", 00:19:01.516 "ffdhe2048", 00:19:01.516 "ffdhe3072", 00:19:01.516 "ffdhe4096", 00:19:01.516 "ffdhe6144", 00:19:01.516 "ffdhe8192" 00:19:01.516 ], 00:19:01.516 "dhchap_digests": [ 00:19:01.516 "sha256", 00:19:01.516 "sha384", 00:19:01.516 "sha512" 00:19:01.516 ], 00:19:01.516 "disable_auto_failback": false, 00:19:01.516 "fast_io_fail_timeout_sec": 0, 00:19:01.516 "generate_uuids": false, 00:19:01.516 "high_priority_weight": 0, 00:19:01.516 "io_path_stat": false, 00:19:01.516 "io_queue_requests": 512, 00:19:01.516 "keep_alive_timeout_ms": 10000, 00:19:01.516 "low_priority_weight": 0, 00:19:01.516 "medium_priority_weight": 0, 00:19:01.516 "nvme_adminq_poll_period_us": 10000, 00:19:01.516 "nvme_error_stat": false, 00:19:01.516 "nvme_ioq_poll_period_us": 0, 00:19:01.516 "rdma_cm_event_timeout_ms": 0, 00:19:01.516 "rdma_max_cq_size": 0, 00:19:01.516 "rdma_srq_size": 0, 00:19:01.516 "reconnect_delay_sec": 0, 00:19:01.516 "timeout_admin_us": 0, 00:19:01.516 "timeout_us": 0, 00:19:01.516 "transport_ack_timeout": 0, 00:19:01.516 "transport_retry_count": 4, 00:19:01.516 "transport_tos": 0 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "bdev_nvme_attach_controller", 00:19:01.516 "params": { 00:19:01.516 "adrfam": "IPv4", 00:19:01.516 "ctrlr_loss_timeout_sec": 0, 00:19:01.516 "ddgst": false, 00:19:01.516 "fast_io_fail_timeout_sec": 0, 00:19:01.516 "hdgst": false, 00:19:01.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.516 "multipath": "multipath", 00:19:01.516 "name": "TLSTEST", 00:19:01.516 "prchk_guard": false, 00:19:01.516 "prchk_reftag": false, 00:19:01.516 "psk": "key0", 00:19:01.516 "reconnect_delay_sec": 0, 00:19:01.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.516 "traddr": "10.0.0.3", 00:19:01.516 "trsvcid": "4420", 00:19:01.516 "trtype": "TCP" 00:19:01.516 } 00:19:01.516 }, 00:19:01.516 { 00:19:01.516 "method": "bdev_nvme_set_hotplug", 00:19:01.516 "params": { 00:19:01.516 "enable": false, 00:19:01.516 "period_us": 100000 00:19:01.517 } 00:19:01.517 }, 00:19:01.517 { 00:19:01.517 "method": "bdev_wait_for_examine" 00:19:01.517 } 00:19:01.517 ] 00:19:01.517 }, 00:19:01.517 { 00:19:01.517 "subsystem": "nbd", 00:19:01.517 "config": [] 00:19:01.517 } 00:19:01.517 ] 00:19:01.517 }' 00:19:01.517 [2024-11-06 13:48:42.269811] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:19:01.517 [2024-11-06 13:48:42.269900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84277 ] 00:19:01.517 [2024-11-06 13:48:42.413858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.776 [2024-11-06 13:48:42.482340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.776 [2024-11-06 13:48:42.690807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.345 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.345 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:02.345 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.345 Running I/O for 10 seconds... 00:19:04.663 5985.00 IOPS, 23.38 MiB/s [2024-11-06T13:48:46.537Z] 5993.00 IOPS, 23.41 MiB/s [2024-11-06T13:48:47.474Z] 5991.33 IOPS, 23.40 MiB/s [2024-11-06T13:48:48.411Z] 5991.75 IOPS, 23.41 MiB/s [2024-11-06T13:48:49.349Z] 5986.40 IOPS, 23.38 MiB/s [2024-11-06T13:48:50.286Z] 5981.00 IOPS, 23.36 MiB/s [2024-11-06T13:48:51.665Z] 5971.57 IOPS, 23.33 MiB/s [2024-11-06T13:48:52.619Z] 5968.00 IOPS, 23.31 MiB/s [2024-11-06T13:48:53.600Z] 5966.56 IOPS, 23.31 MiB/s [2024-11-06T13:48:53.601Z] 5962.70 IOPS, 23.29 MiB/s 00:19:12.668 Latency(us) 00:19:12.668 [2024-11-06T13:48:53.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.668 Verification LBA range: start 0x0 length 0x2000 00:19:12.668 TLSTESTn1 : 10.01 5968.45 23.31 0.00 0.00 21413.57 4211.15 15897.09 00:19:12.668 [2024-11-06T13:48:53.601Z] =================================================================================================================== 00:19:12.668 [2024-11-06T13:48:53.601Z] Total : 5968.45 23.31 0.00 0.00 21413.57 4211.15 15897.09 00:19:12.668 { 00:19:12.668 "results": [ 00:19:12.668 { 00:19:12.668 "job": "TLSTESTn1", 00:19:12.668 "core_mask": "0x4", 00:19:12.668 "workload": "verify", 00:19:12.668 "status": "finished", 00:19:12.668 "verify_range": { 00:19:12.668 "start": 0, 00:19:12.668 "length": 8192 00:19:12.668 }, 00:19:12.668 "queue_depth": 128, 00:19:12.668 "io_size": 4096, 00:19:12.668 "runtime": 10.011813, 00:19:12.668 "iops": 5968.449470640333, 00:19:12.668 "mibps": 23.3142557446888, 00:19:12.668 "io_failed": 0, 00:19:12.668 "io_timeout": 0, 00:19:12.668 "avg_latency_us": 21413.57189847836, 00:19:12.668 "min_latency_us": 4211.14859437751, 00:19:12.668 "max_latency_us": 15897.0859437751 00:19:12.668 } 00:19:12.668 ], 00:19:12.668 "core_count": 1 00:19:12.668 } 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84277 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84277 ']' 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84277 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 
-- # uname 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84277 00:19:12.668 killing process with pid 84277 00:19:12.668 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.668 00:19:12.668 Latency(us) 00:19:12.668 [2024-11-06T13:48:53.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.668 [2024-11-06T13:48:53.601Z] =================================================================================================================== 00:19:12.668 [2024-11-06T13:48:53.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84277' 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84277 00:19:12.668 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84277 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84235 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84235 ']' 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84235 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84235 00:19:12.927 killing process with pid 84235 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84235' 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84235 00:19:12.927 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84235 00:19:13.186 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:13.186 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84433 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84433 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 
-- # '[' -z 84433 ']' 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.187 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.187 [2024-11-06 13:48:53.996621] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:13.187 [2024-11-06 13:48:53.996885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.446 [2024-11-06 13:48:54.131941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.446 [2024-11-06 13:48:54.198675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.446 [2024-11-06 13:48:54.198725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.446 [2024-11-06 13:48:54.198735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.446 [2024-11-06 13:48:54.198744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.446 [2024-11-06 13:48:54.198751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
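With the fresh target (pid 84433) up, setup_nvmf_tgt configures TLS end to end; the xtrace that follows runs, in order, the sequence consolidated here (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, exactly as invoked below):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-enabled (logged below as experimental)
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0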
00:19:13.446 [2024-11-06 13:48:54.199169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.013 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.013 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:14.013 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.013 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.013 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.273 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.273 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pyYf7ozTxW 00:19:14.273 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pyYf7ozTxW 00:19:14.273 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:14.273 [2024-11-06 13:48:55.165340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.273 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.532 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:14.791 [2024-11-06 13:48:55.592748] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.791 [2024-11-06 13:48:55.593017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:14.791 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:15.051 malloc0 00:19:15.051 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:15.310 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84537 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84537 /var/tmp/bdevperf.sock 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84537 ']' 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.570 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.829 [2024-11-06 13:48:56.517982] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:15.829 [2024-11-06 13:48:56.518058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84537 ] 00:19:15.829 [2024-11-06 13:48:56.669089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.829 [2024-11-06 13:48:56.731872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.770 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:16.770 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:16.770 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:19:16.770 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:17.029 [2024-11-06 13:48:57.837086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.029 nvme0n1 00:19:17.029 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:17.289 Running I/O for 1 seconds... 
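On the initiator side the handshake takes exactly two RPCs against the bdevperf socket, as traced above: register the same PSK under the name key0, then attach with --psk so the connection to 10.0.0.3:4420 is established over TLS:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

As a sanity check on the numbers that follow: with 4096-byte I/O, MiB/s = IOPS * 4096 / 2^20, so the reported ~5986 IOPS corresponds to ~23.4 MiB/s.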
00:19:18.227 5925.00 IOPS, 23.14 MiB/s 00:19:18.227 Latency(us) 00:19:18.227 [2024-11-06T13:48:59.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.227 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:18.227 Verification LBA range: start 0x0 length 0x2000 00:19:18.227 nvme0n1 : 1.01 5986.55 23.38 0.00 0.00 21241.18 3974.27 18634.33 00:19:18.227 [2024-11-06T13:48:59.160Z] =================================================================================================================== 00:19:18.227 [2024-11-06T13:48:59.160Z] Total : 5986.55 23.38 0.00 0.00 21241.18 3974.27 18634.33 00:19:18.227 { 00:19:18.227 "results": [ 00:19:18.227 { 00:19:18.227 "job": "nvme0n1", 00:19:18.227 "core_mask": "0x2", 00:19:18.227 "workload": "verify", 00:19:18.227 "status": "finished", 00:19:18.227 "verify_range": { 00:19:18.227 "start": 0, 00:19:18.227 "length": 8192 00:19:18.227 }, 00:19:18.227 "queue_depth": 128, 00:19:18.227 "io_size": 4096, 00:19:18.227 "runtime": 1.0111, 00:19:18.227 "iops": 5986.5493027395905, 00:19:18.227 "mibps": 23.384958213826526, 00:19:18.227 "io_failed": 0, 00:19:18.227 "io_timeout": 0, 00:19:18.227 "avg_latency_us": 21241.178653089144, 00:19:18.227 "min_latency_us": 3974.271485943775, 00:19:18.227 "max_latency_us": 18634.332530120482 00:19:18.227 } 00:19:18.227 ], 00:19:18.227 "core_count": 1 00:19:18.227 } 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84537 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84537 ']' 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84537 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84537 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:18.227 killing process with pid 84537 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84537' 00:19:18.227 Received shutdown signal, test time was about 1.000000 seconds 00:19:18.227 00:19:18.227 Latency(us) 00:19:18.227 [2024-11-06T13:48:59.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.227 [2024-11-06T13:48:59.160Z] =================================================================================================================== 00:19:18.227 [2024-11-06T13:48:59.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84537 00:19:18.227 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84537 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84433 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84433 ']' 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84433 00:19:18.505 13:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84433 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:18.505 killing process with pid 84433 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84433' 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84433 00:19:18.505 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84433 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84621 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84621 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84621 ']' 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.777 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.036 [2024-11-06 13:48:59.752314] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:19.036 [2024-11-06 13:48:59.752409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.036 [2024-11-06 13:48:59.905379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.036 [2024-11-06 13:48:59.959834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.036 [2024-11-06 13:48:59.959898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:19.036 [2024-11-06 13:48:59.959908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.036 [2024-11-06 13:48:59.959917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.036 [2024-11-06 13:48:59.959924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.036 [2024-11-06 13:48:59.960281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.971 [2024-11-06 13:49:00.702888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.971 malloc0 00:19:19.971 [2024-11-06 13:49:00.737316] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.971 [2024-11-06 13:49:00.737545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84671 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84671 /var/tmp/bdevperf.sock 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84671 ']' 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.971 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.971 [2024-11-06 13:49:00.821701] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:19:19.972 [2024-11-06 13:49:00.821767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84671 ] 00:19:20.230 [2024-11-06 13:49:00.971478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.230 [2024-11-06 13:49:01.031010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.797 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.797 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:20.797 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pyYf7ozTxW 00:19:21.057 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.316 [2024-11-06 13:49:02.108002] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.316 nvme0n1 00:19:21.316 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.575 Running I/O for 1 seconds... 00:19:22.511 5941.00 IOPS, 23.21 MiB/s 00:19:22.511 Latency(us) 00:19:22.511 [2024-11-06T13:49:03.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.511 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:22.511 Verification LBA range: start 0x0 length 0x2000 00:19:22.511 nvme0n1 : 1.01 5999.08 23.43 0.00 0.00 21190.68 4421.71 16528.76 00:19:22.511 [2024-11-06T13:49:03.444Z] =================================================================================================================== 00:19:22.511 [2024-11-06T13:49:03.444Z] Total : 5999.08 23.43 0.00 0.00 21190.68 4421.71 16528.76 00:19:22.511 { 00:19:22.511 "results": [ 00:19:22.511 { 00:19:22.511 "job": "nvme0n1", 00:19:22.511 "core_mask": "0x2", 00:19:22.511 "workload": "verify", 00:19:22.511 "status": "finished", 00:19:22.511 "verify_range": { 00:19:22.511 "start": 0, 00:19:22.511 "length": 8192 00:19:22.511 }, 00:19:22.511 "queue_depth": 128, 00:19:22.511 "io_size": 4096, 00:19:22.511 "runtime": 1.011822, 00:19:22.511 "iops": 5999.078889369869, 00:19:22.512 "mibps": 23.43390191160105, 00:19:22.512 "io_failed": 0, 00:19:22.512 "io_timeout": 0, 00:19:22.512 "avg_latency_us": 21190.675769569218, 00:19:22.512 "min_latency_us": 4421.706024096386, 00:19:22.512 "max_latency_us": 16528.758232931727 00:19:22.512 } 00:19:22.512 ], 00:19:22.512 "core_count": 1 00:19:22.512 } 00:19:22.512 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:22.512 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.512 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.771 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.771 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:19:22.771 "subsystems": [ 00:19:22.771 { 00:19:22.771 "subsystem": "keyring", 00:19:22.771 "config": [ 00:19:22.771 { 00:19:22.771 "method": "keyring_file_add_key", 00:19:22.771 "params": { 00:19:22.771 "name": "key0", 00:19:22.771 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:22.771 } 00:19:22.771 } 00:19:22.771 ] 00:19:22.771 }, 00:19:22.771 { 00:19:22.771 "subsystem": "iobuf", 00:19:22.771 "config": [ 00:19:22.771 { 00:19:22.771 "method": "iobuf_set_options", 00:19:22.771 "params": { 00:19:22.771 "enable_numa": false, 00:19:22.771 "large_bufsize": 135168, 00:19:22.771 "large_pool_count": 1024, 00:19:22.771 "small_bufsize": 8192, 00:19:22.771 "small_pool_count": 8192 00:19:22.771 } 00:19:22.771 } 00:19:22.771 ] 00:19:22.771 }, 00:19:22.771 { 00:19:22.771 "subsystem": "sock", 00:19:22.771 "config": [ 00:19:22.771 { 00:19:22.771 "method": "sock_set_default_impl", 00:19:22.771 "params": { 00:19:22.771 "impl_name": "posix" 00:19:22.771 } 00:19:22.771 }, 00:19:22.771 { 00:19:22.771 "method": "sock_impl_set_options", 00:19:22.771 "params": { 00:19:22.772 "enable_ktls": false, 00:19:22.772 "enable_placement_id": 0, 00:19:22.772 "enable_quickack": false, 00:19:22.772 "enable_recv_pipe": true, 00:19:22.772 "enable_zerocopy_send_client": false, 00:19:22.772 "enable_zerocopy_send_server": true, 00:19:22.772 "impl_name": "ssl", 00:19:22.772 "recv_buf_size": 4096, 00:19:22.772 "send_buf_size": 4096, 00:19:22.772 "tls_version": 0, 00:19:22.772 "zerocopy_threshold": 0 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "sock_impl_set_options", 00:19:22.772 "params": { 00:19:22.772 "enable_ktls": false, 00:19:22.772 "enable_placement_id": 0, 00:19:22.772 "enable_quickack": false, 00:19:22.772 "enable_recv_pipe": true, 00:19:22.772 "enable_zerocopy_send_client": false, 00:19:22.772 "enable_zerocopy_send_server": true, 00:19:22.772 "impl_name": "posix", 00:19:22.772 "recv_buf_size": 2097152, 00:19:22.772 "send_buf_size": 2097152, 00:19:22.772 "tls_version": 0, 00:19:22.772 "zerocopy_threshold": 0 00:19:22.772 } 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "vmd", 00:19:22.772 "config": [] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "accel", 00:19:22.772 "config": [ 00:19:22.772 { 00:19:22.772 "method": "accel_set_options", 00:19:22.772 "params": { 00:19:22.772 "buf_count": 2048, 00:19:22.772 "large_cache_size": 16, 00:19:22.772 "sequence_count": 2048, 00:19:22.772 "small_cache_size": 128, 00:19:22.772 "task_count": 2048 00:19:22.772 } 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "bdev", 00:19:22.772 "config": [ 00:19:22.772 { 00:19:22.772 "method": "bdev_set_options", 00:19:22.772 "params": { 00:19:22.772 "bdev_auto_examine": true, 00:19:22.772 "bdev_io_cache_size": 256, 00:19:22.772 "bdev_io_pool_size": 65535, 00:19:22.772 "iobuf_large_cache_size": 16, 00:19:22.772 "iobuf_small_cache_size": 128 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_raid_set_options", 00:19:22.772 "params": { 00:19:22.772 "process_max_bandwidth_mb_sec": 0, 00:19:22.772 "process_window_size_kb": 1024 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_iscsi_set_options", 00:19:22.772 "params": { 00:19:22.772 "timeout_sec": 30 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_nvme_set_options", 00:19:22.772 "params": { 00:19:22.772 "action_on_timeout": "none", 00:19:22.772 "allow_accel_sequence": false, 00:19:22.772 "arbitration_burst": 0, 00:19:22.772 
"bdev_retry_count": 3, 00:19:22.772 "ctrlr_loss_timeout_sec": 0, 00:19:22.772 "delay_cmd_submit": true, 00:19:22.772 "dhchap_dhgroups": [ 00:19:22.772 "null", 00:19:22.772 "ffdhe2048", 00:19:22.772 "ffdhe3072", 00:19:22.772 "ffdhe4096", 00:19:22.772 "ffdhe6144", 00:19:22.772 "ffdhe8192" 00:19:22.772 ], 00:19:22.772 "dhchap_digests": [ 00:19:22.772 "sha256", 00:19:22.772 "sha384", 00:19:22.772 "sha512" 00:19:22.772 ], 00:19:22.772 "disable_auto_failback": false, 00:19:22.772 "fast_io_fail_timeout_sec": 0, 00:19:22.772 "generate_uuids": false, 00:19:22.772 "high_priority_weight": 0, 00:19:22.772 "io_path_stat": false, 00:19:22.772 "io_queue_requests": 0, 00:19:22.772 "keep_alive_timeout_ms": 10000, 00:19:22.772 "low_priority_weight": 0, 00:19:22.772 "medium_priority_weight": 0, 00:19:22.772 "nvme_adminq_poll_period_us": 10000, 00:19:22.772 "nvme_error_stat": false, 00:19:22.772 "nvme_ioq_poll_period_us": 0, 00:19:22.772 "rdma_cm_event_timeout_ms": 0, 00:19:22.772 "rdma_max_cq_size": 0, 00:19:22.772 "rdma_srq_size": 0, 00:19:22.772 "reconnect_delay_sec": 0, 00:19:22.772 "timeout_admin_us": 0, 00:19:22.772 "timeout_us": 0, 00:19:22.772 "transport_ack_timeout": 0, 00:19:22.772 "transport_retry_count": 4, 00:19:22.772 "transport_tos": 0 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_nvme_set_hotplug", 00:19:22.772 "params": { 00:19:22.772 "enable": false, 00:19:22.772 "period_us": 100000 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_malloc_create", 00:19:22.772 "params": { 00:19:22.772 "block_size": 4096, 00:19:22.772 "dif_is_head_of_md": false, 00:19:22.772 "dif_pi_format": 0, 00:19:22.772 "dif_type": 0, 00:19:22.772 "md_size": 0, 00:19:22.772 "name": "malloc0", 00:19:22.772 "num_blocks": 8192, 00:19:22.772 "optimal_io_boundary": 0, 00:19:22.772 "physical_block_size": 4096, 00:19:22.772 "uuid": "aba5fc86-e8ba-466c-8af9-9b3f3a29579a" 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "bdev_wait_for_examine" 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "nbd", 00:19:22.772 "config": [] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "scheduler", 00:19:22.772 "config": [ 00:19:22.772 { 00:19:22.772 "method": "framework_set_scheduler", 00:19:22.772 "params": { 00:19:22.772 "name": "static" 00:19:22.772 } 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "subsystem": "nvmf", 00:19:22.772 "config": [ 00:19:22.772 { 00:19:22.772 "method": "nvmf_set_config", 00:19:22.772 "params": { 00:19:22.772 "admin_cmd_passthru": { 00:19:22.772 "identify_ctrlr": false 00:19:22.772 }, 00:19:22.772 "dhchap_dhgroups": [ 00:19:22.772 "null", 00:19:22.772 "ffdhe2048", 00:19:22.772 "ffdhe3072", 00:19:22.772 "ffdhe4096", 00:19:22.772 "ffdhe6144", 00:19:22.772 "ffdhe8192" 00:19:22.772 ], 00:19:22.772 "dhchap_digests": [ 00:19:22.772 "sha256", 00:19:22.772 "sha384", 00:19:22.772 "sha512" 00:19:22.772 ], 00:19:22.772 "discovery_filter": "match_any" 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_set_max_subsystems", 00:19:22.772 "params": { 00:19:22.772 "max_subsystems": 1024 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_set_crdt", 00:19:22.772 "params": { 00:19:22.772 "crdt1": 0, 00:19:22.772 "crdt2": 0, 00:19:22.772 "crdt3": 0 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_create_transport", 00:19:22.772 "params": { 00:19:22.772 "abort_timeout_sec": 1, 00:19:22.772 "ack_timeout": 0, 
00:19:22.772 "buf_cache_size": 4294967295, 00:19:22.772 "c2h_success": false, 00:19:22.772 "data_wr_pool_size": 0, 00:19:22.772 "dif_insert_or_strip": false, 00:19:22.772 "in_capsule_data_size": 4096, 00:19:22.772 "io_unit_size": 131072, 00:19:22.772 "max_aq_depth": 128, 00:19:22.772 "max_io_qpairs_per_ctrlr": 127, 00:19:22.772 "max_io_size": 131072, 00:19:22.772 "max_queue_depth": 128, 00:19:22.772 "num_shared_buffers": 511, 00:19:22.772 "sock_priority": 0, 00:19:22.772 "trtype": "TCP", 00:19:22.772 "zcopy": false 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_create_subsystem", 00:19:22.772 "params": { 00:19:22.772 "allow_any_host": false, 00:19:22.772 "ana_reporting": false, 00:19:22.772 "max_cntlid": 65519, 00:19:22.772 "max_namespaces": 32, 00:19:22.772 "min_cntlid": 1, 00:19:22.772 "model_number": "SPDK bdev Controller", 00:19:22.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.772 "serial_number": "00000000000000000000" 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_subsystem_add_host", 00:19:22.772 "params": { 00:19:22.772 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.772 "psk": "key0" 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_subsystem_add_ns", 00:19:22.772 "params": { 00:19:22.772 "namespace": { 00:19:22.772 "bdev_name": "malloc0", 00:19:22.772 "nguid": "ABA5FC86E8BA466C8AF99B3F3A29579A", 00:19:22.772 "no_auto_visible": false, 00:19:22.772 "nsid": 1, 00:19:22.772 "uuid": "aba5fc86-e8ba-466c-8af9-9b3f3a29579a" 00:19:22.772 }, 00:19:22.772 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:22.772 } 00:19:22.772 }, 00:19:22.772 { 00:19:22.772 "method": "nvmf_subsystem_add_listener", 00:19:22.772 "params": { 00:19:22.772 "listen_address": { 00:19:22.772 "adrfam": "IPv4", 00:19:22.772 "traddr": "10.0.0.3", 00:19:22.772 "trsvcid": "4420", 00:19:22.772 "trtype": "TCP" 00:19:22.772 }, 00:19:22.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.772 "secure_channel": false, 00:19:22.772 "sock_impl": "ssl" 00:19:22.772 } 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 } 00:19:22.772 ] 00:19:22.772 }' 00:19:22.772 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:23.033 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:23.033 "subsystems": [ 00:19:23.033 { 00:19:23.033 "subsystem": "keyring", 00:19:23.033 "config": [ 00:19:23.033 { 00:19:23.033 "method": "keyring_file_add_key", 00:19:23.033 "params": { 00:19:23.033 "name": "key0", 00:19:23.033 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:23.033 } 00:19:23.033 } 00:19:23.033 ] 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "subsystem": "iobuf", 00:19:23.033 "config": [ 00:19:23.033 { 00:19:23.033 "method": "iobuf_set_options", 00:19:23.033 "params": { 00:19:23.033 "enable_numa": false, 00:19:23.033 "large_bufsize": 135168, 00:19:23.033 "large_pool_count": 1024, 00:19:23.033 "small_bufsize": 8192, 00:19:23.033 "small_pool_count": 8192 00:19:23.033 } 00:19:23.033 } 00:19:23.033 ] 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "subsystem": "sock", 00:19:23.033 "config": [ 00:19:23.033 { 00:19:23.033 "method": "sock_set_default_impl", 00:19:23.033 "params": { 00:19:23.033 "impl_name": "posix" 00:19:23.033 } 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "method": "sock_impl_set_options", 00:19:23.033 "params": { 00:19:23.033 "enable_ktls": false, 00:19:23.033 "enable_placement_id": 0, 
00:19:23.033 "enable_quickack": false, 00:19:23.033 "enable_recv_pipe": true, 00:19:23.033 "enable_zerocopy_send_client": false, 00:19:23.033 "enable_zerocopy_send_server": true, 00:19:23.033 "impl_name": "ssl", 00:19:23.033 "recv_buf_size": 4096, 00:19:23.033 "send_buf_size": 4096, 00:19:23.033 "tls_version": 0, 00:19:23.033 "zerocopy_threshold": 0 00:19:23.033 } 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "method": "sock_impl_set_options", 00:19:23.033 "params": { 00:19:23.033 "enable_ktls": false, 00:19:23.033 "enable_placement_id": 0, 00:19:23.033 "enable_quickack": false, 00:19:23.033 "enable_recv_pipe": true, 00:19:23.033 "enable_zerocopy_send_client": false, 00:19:23.033 "enable_zerocopy_send_server": true, 00:19:23.033 "impl_name": "posix", 00:19:23.033 "recv_buf_size": 2097152, 00:19:23.033 "send_buf_size": 2097152, 00:19:23.033 "tls_version": 0, 00:19:23.033 "zerocopy_threshold": 0 00:19:23.033 } 00:19:23.033 } 00:19:23.033 ] 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "subsystem": "vmd", 00:19:23.033 "config": [] 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "subsystem": "accel", 00:19:23.033 "config": [ 00:19:23.033 { 00:19:23.033 "method": "accel_set_options", 00:19:23.033 "params": { 00:19:23.033 "buf_count": 2048, 00:19:23.033 "large_cache_size": 16, 00:19:23.033 "sequence_count": 2048, 00:19:23.033 "small_cache_size": 128, 00:19:23.033 "task_count": 2048 00:19:23.033 } 00:19:23.033 } 00:19:23.033 ] 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "subsystem": "bdev", 00:19:23.033 "config": [ 00:19:23.033 { 00:19:23.033 "method": "bdev_set_options", 00:19:23.033 "params": { 00:19:23.033 "bdev_auto_examine": true, 00:19:23.033 "bdev_io_cache_size": 256, 00:19:23.033 "bdev_io_pool_size": 65535, 00:19:23.033 "iobuf_large_cache_size": 16, 00:19:23.033 "iobuf_small_cache_size": 128 00:19:23.033 } 00:19:23.033 }, 00:19:23.033 { 00:19:23.033 "method": "bdev_raid_set_options", 00:19:23.033 "params": { 00:19:23.034 "process_max_bandwidth_mb_sec": 0, 00:19:23.034 "process_window_size_kb": 1024 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_iscsi_set_options", 00:19:23.034 "params": { 00:19:23.034 "timeout_sec": 30 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_nvme_set_options", 00:19:23.034 "params": { 00:19:23.034 "action_on_timeout": "none", 00:19:23.034 "allow_accel_sequence": false, 00:19:23.034 "arbitration_burst": 0, 00:19:23.034 "bdev_retry_count": 3, 00:19:23.034 "ctrlr_loss_timeout_sec": 0, 00:19:23.034 "delay_cmd_submit": true, 00:19:23.034 "dhchap_dhgroups": [ 00:19:23.034 "null", 00:19:23.034 "ffdhe2048", 00:19:23.034 "ffdhe3072", 00:19:23.034 "ffdhe4096", 00:19:23.034 "ffdhe6144", 00:19:23.034 "ffdhe8192" 00:19:23.034 ], 00:19:23.034 "dhchap_digests": [ 00:19:23.034 "sha256", 00:19:23.034 "sha384", 00:19:23.034 "sha512" 00:19:23.034 ], 00:19:23.034 "disable_auto_failback": false, 00:19:23.034 "fast_io_fail_timeout_sec": 0, 00:19:23.034 "generate_uuids": false, 00:19:23.034 "high_priority_weight": 0, 00:19:23.034 "io_path_stat": false, 00:19:23.034 "io_queue_requests": 512, 00:19:23.034 "keep_alive_timeout_ms": 10000, 00:19:23.034 "low_priority_weight": 0, 00:19:23.034 "medium_priority_weight": 0, 00:19:23.034 "nvme_adminq_poll_period_us": 10000, 00:19:23.034 "nvme_error_stat": false, 00:19:23.034 "nvme_ioq_poll_period_us": 0, 00:19:23.034 "rdma_cm_event_timeout_ms": 0, 00:19:23.034 "rdma_max_cq_size": 0, 00:19:23.034 "rdma_srq_size": 0, 00:19:23.034 "reconnect_delay_sec": 0, 00:19:23.034 "timeout_admin_us": 0, 00:19:23.034 
"timeout_us": 0, 00:19:23.034 "transport_ack_timeout": 0, 00:19:23.034 "transport_retry_count": 4, 00:19:23.034 "transport_tos": 0 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_nvme_attach_controller", 00:19:23.034 "params": { 00:19:23.034 "adrfam": "IPv4", 00:19:23.034 "ctrlr_loss_timeout_sec": 0, 00:19:23.034 "ddgst": false, 00:19:23.034 "fast_io_fail_timeout_sec": 0, 00:19:23.034 "hdgst": false, 00:19:23.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.034 "multipath": "multipath", 00:19:23.034 "name": "nvme0", 00:19:23.034 "prchk_guard": false, 00:19:23.034 "prchk_reftag": false, 00:19:23.034 "psk": "key0", 00:19:23.034 "reconnect_delay_sec": 0, 00:19:23.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.034 "traddr": "10.0.0.3", 00:19:23.034 "trsvcid": "4420", 00:19:23.034 "trtype": "TCP" 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_nvme_set_hotplug", 00:19:23.034 "params": { 00:19:23.034 "enable": false, 00:19:23.034 "period_us": 100000 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_enable_histogram", 00:19:23.034 "params": { 00:19:23.034 "enable": true, 00:19:23.034 "name": "nvme0n1" 00:19:23.034 } 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "method": "bdev_wait_for_examine" 00:19:23.034 } 00:19:23.034 ] 00:19:23.034 }, 00:19:23.034 { 00:19:23.034 "subsystem": "nbd", 00:19:23.034 "config": [] 00:19:23.034 } 00:19:23.034 ] 00:19:23.034 }' 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84671 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84671 ']' 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84671 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84671 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84671' 00:19:23.034 killing process with pid 84671 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84671 00:19:23.034 Received shutdown signal, test time was about 1.000000 seconds 00:19:23.034 00:19:23.034 Latency(us) 00:19:23.034 [2024-11-06T13:49:03.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.034 [2024-11-06T13:49:03.967Z] =================================================================================================================== 00:19:23.034 [2024-11-06T13:49:03.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.034 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84671 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84621 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84621 ']' 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84621 
00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84621 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:23.294 killing process with pid 84621 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84621' 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84621 00:19:23.294 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84621 00:19:23.554 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:23.554 "subsystems": [ 00:19:23.554 { 00:19:23.554 "subsystem": "keyring", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "keyring_file_add_key", 00:19:23.554 "params": { 00:19:23.554 "name": "key0", 00:19:23.554 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:23.554 } 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "iobuf", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "iobuf_set_options", 00:19:23.554 "params": { 00:19:23.554 "enable_numa": false, 00:19:23.554 "large_bufsize": 135168, 00:19:23.554 "large_pool_count": 1024, 00:19:23.554 "small_bufsize": 8192, 00:19:23.554 "small_pool_count": 8192 00:19:23.554 } 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "sock", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "sock_set_default_impl", 00:19:23.554 "params": { 00:19:23.554 "impl_name": "posix" 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "sock_impl_set_options", 00:19:23.554 "params": { 00:19:23.554 "enable_ktls": false, 00:19:23.554 "enable_placement_id": 0, 00:19:23.554 "enable_quickack": false, 00:19:23.554 "enable_recv_pipe": true, 00:19:23.554 "enable_zerocopy_send_client": false, 00:19:23.554 "enable_zerocopy_send_server": true, 00:19:23.554 "impl_name": "ssl", 00:19:23.554 "recv_buf_size": 4096, 00:19:23.554 "send_buf_size": 4096, 00:19:23.554 "tls_version": 0, 00:19:23.554 "zerocopy_threshold": 0 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "sock_impl_set_options", 00:19:23.554 "params": { 00:19:23.554 "enable_ktls": false, 00:19:23.554 "enable_placement_id": 0, 00:19:23.554 "enable_quickack": false, 00:19:23.554 "enable_recv_pipe": true, 00:19:23.554 "enable_zerocopy_send_client": false, 00:19:23.554 "enable_zerocopy_send_server": true, 00:19:23.554 "impl_name": "posix", 00:19:23.554 "recv_buf_size": 2097152, 00:19:23.554 "send_buf_size": 2097152, 00:19:23.554 "tls_version": 0, 00:19:23.554 "zerocopy_threshold": 0 00:19:23.554 } 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "vmd", 00:19:23.554 "config": [] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "accel", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "accel_set_options", 00:19:23.554 "params": { 00:19:23.554 "buf_count": 2048, 00:19:23.554 "large_cache_size": 16, 00:19:23.554 "sequence_count": 2048, 00:19:23.554 "small_cache_size": 128, 
00:19:23.554 "task_count": 2048 00:19:23.554 } 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "bdev", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "bdev_set_options", 00:19:23.554 "params": { 00:19:23.554 "bdev_auto_examine": true, 00:19:23.554 "bdev_io_cache_size": 256, 00:19:23.554 "bdev_io_pool_size": 65535, 00:19:23.554 "iobuf_large_cache_size": 16, 00:19:23.554 "iobuf_small_cache_size": 128 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_raid_set_options", 00:19:23.554 "params": { 00:19:23.554 "process_max_bandwidth_mb_sec": 0, 00:19:23.554 "process_window_size_kb": 1024 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_iscsi_set_options", 00:19:23.554 "params": { 00:19:23.554 "timeout_sec": 30 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_nvme_set_options", 00:19:23.554 "params": { 00:19:23.554 "action_on_timeout": "none", 00:19:23.554 "allow_accel_sequence": false, 00:19:23.554 "arbitration_burst": 0, 00:19:23.554 "bdev_retry_count": 3, 00:19:23.554 "ctrlr_loss_timeout_sec": 0, 00:19:23.554 "delay_cmd_submit": true, 00:19:23.554 "dhchap_dhgroups": [ 00:19:23.554 "null", 00:19:23.554 "ffdhe2048", 00:19:23.554 "ffdhe3072", 00:19:23.554 "ffdhe4096", 00:19:23.554 "ffdhe6144", 00:19:23.554 "ffdhe8192" 00:19:23.554 ], 00:19:23.554 "dhchap_digests": [ 00:19:23.554 "sha256", 00:19:23.554 "sha384", 00:19:23.554 "sha512" 00:19:23.554 ], 00:19:23.554 "disable_auto_failback": false, 00:19:23.554 "fast_io_fail_timeout_sec": 0, 00:19:23.554 "generate_uuids": false, 00:19:23.554 "high_priority_weight": 0, 00:19:23.554 "io_path_stat": false, 00:19:23.554 "io_queue_requests": 0, 00:19:23.554 "keep_alive_timeout_ms": 10000, 00:19:23.554 "low_priority_weight": 0, 00:19:23.554 "medium_priority_weight": 0, 00:19:23.554 "nvme_adminq_poll_period_us": 10000, 00:19:23.554 "nvme_error_stat": false, 00:19:23.554 "nvme_ioq_poll_period_us": 0, 00:19:23.554 "rdma_cm_event_timeout_ms": 0, 00:19:23.554 "rdma_max_cq_size": 0, 00:19:23.554 "rdma_srq_size": 0, 00:19:23.554 "reconnect_delay_sec": 0, 00:19:23.554 "timeout_admin_us": 0, 00:19:23.554 "timeout_us": 0, 00:19:23.554 "transport_ack_timeout": 0, 00:19:23.554 "transport_retry_count": 4, 00:19:23.554 "transport_tos": 0 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_nvme_set_hotplug", 00:19:23.554 "params": { 00:19:23.554 "enable": false, 00:19:23.554 "period_us": 100000 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_malloc_create", 00:19:23.554 "params": { 00:19:23.554 "block_size": 4096, 00:19:23.554 "dif_is_head_of_md": false, 00:19:23.554 "dif_pi_format": 0, 00:19:23.554 "dif_type": 0, 00:19:23.554 "md_size": 0, 00:19:23.554 "name": "malloc0", 00:19:23.554 "num_blocks": 8192, 00:19:23.554 "optimal_io_boundary": 0, 00:19:23.554 "physical_block_size": 4096, 00:19:23.554 "uuid": "aba5fc86-e8ba-466c-8af9-9b3f3a29579a" 00:19:23.554 } 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "method": "bdev_wait_for_examine" 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "nbd", 00:19:23.554 "config": [] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "scheduler", 00:19:23.554 "config": [ 00:19:23.554 { 00:19:23.554 "method": "framework_set_scheduler", 00:19:23.554 "params": { 00:19:23.554 "name": "static" 00:19:23.554 } 00:19:23.554 } 00:19:23.554 ] 00:19:23.554 }, 00:19:23.554 { 00:19:23.554 "subsystem": "nvmf", 00:19:23.554 "config": [ 
00:19:23.554 { 00:19:23.554 "method": "nvmf_set_config", 00:19:23.554 "params": { 00:19:23.554 "admin_cmd_passthru": { 00:19:23.554 "identify_ctrlr": false 00:19:23.555 }, 00:19:23.555 "dhchap_dhgroups": [ 00:19:23.555 "null", 00:19:23.555 "ffdhe2048", 00:19:23.555 "ffdhe3072", 00:19:23.555 "ffdhe4096", 00:19:23.555 "ffdhe6144", 00:19:23.555 "ffdhe8192" 00:19:23.555 ], 00:19:23.555 "dhchap_digests": [ 00:19:23.555 "sha256", 00:19:23.555 "sha384", 00:19:23.555 "sha512" 00:19:23.555 ], 00:19:23.555 "discovery_filter": "match_any" 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_set_max_subsystems", 00:19:23.555 "params": { 00:19:23.555 "max_subsystems": 1024 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_set_crdt", 00:19:23.555 "params": { 00:19:23.555 "crdt1": 0, 00:19:23.555 "crdt2": 0, 00:19:23.555 "crdt3": 0 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_create_transport", 00:19:23.555 "params": { 00:19:23.555 "abort_timeout_sec": 1, 00:19:23.555 "ack_timeout": 0, 00:19:23.555 "buf_cache_size": 4294967295, 00:19:23.555 "c2h_success": false, 00:19:23.555 "data_wr_pool_size": 0, 00:19:23.555 "dif_insert_or_strip": false, 00:19:23.555 "in_capsule_data_size": 4096, 00:19:23.555 "io_unit_size": 131072, 00:19:23.555 "max_aq_depth": 128, 00:19:23.555 "max_io_qpairs_per_ctrlr": 127, 00:19:23.555 "max_io_size": 131072, 00:19:23.555 "max_queue_depth": 128, 00:19:23.555 "num_shared_buffers": 511, 00:19:23.555 "sock_priority": 0, 00:19:23.555 "trtype": "TCP", 00:19:23.555 "zcopy": false 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_create_subsystem", 00:19:23.555 "params": { 00:19:23.555 "allow_any_host": false, 00:19:23.555 "ana_reporting": false, 00:19:23.555 "max_cntlid": 65519, 00:19:23.555 "max_namespaces": 32, 00:19:23.555 "min_cntlid": 1, 00:19:23.555 "model_number": "SPDK bdev Controller", 00:19:23.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.555 "serial_number": "00000000000000000000" 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_subsystem_add_host", 00:19:23.555 "params": { 00:19:23.555 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.555 "psk": "key0" 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_subsystem_add_ns", 00:19:23.555 "params": { 00:19:23.555 "namespace": { 00:19:23.555 "bdev_name": "malloc0", 00:19:23.555 "nguid": "ABA5FC86E8BA466C8AF99B3F3A29579A", 00:19:23.555 "no_auto_visible": false, 00:19:23.555 "nsid": 1, 00:19:23.555 "uuid": "aba5fc86-e8ba-466c-8af9-9b3f3a29579a" 00:19:23.555 }, 00:19:23.555 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:23.555 } 00:19:23.555 }, 00:19:23.555 { 00:19:23.555 "method": "nvmf_subsystem_add_listener", 00:19:23.555 "params": { 00:19:23.555 "listen_address": { 00:19:23.555 "adrfam": "IPv4", 00:19:23.555 "traddr": "10.0.0.3", 00:19:23.555 "trsvcid": "4420", 00:19:23.555 "trtype": "TCP" 00:19:23.555 }, 00:19:23.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.555 "secure_channel": false, 00:19:23.555 "sock_impl": "ssl" 00:19:23.555 } 00:19:23.555 } 00:19:23.555 ] 00:19:23.555 } 00:19:23.555 ] 00:19:23.555 }' 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84756 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84756 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84756 ']' 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.555 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.555 [2024-11-06 13:49:04.439846] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:23.555 [2024-11-06 13:49:04.439929] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.814 [2024-11-06 13:49:04.591468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.814 [2024-11-06 13:49:04.645691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.814 [2024-11-06 13:49:04.645748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.814 [2024-11-06 13:49:04.645757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.814 [2024-11-06 13:49:04.645765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.814 [2024-11-06 13:49:04.645787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
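The target above is launched inside the nvmf_tgt_ns_spdk namespace with its JSON configuration streamed over /dev/fd/62, i.e. bash process substitution, so the config never touches disk. A minimal sketch of that launch pattern, assuming the paths from this log; gen_config is a hypothetical stand-in for the config echoed by tls.sh:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt with a config fed through process substitution.
# The /dev/fd/62 seen in the log is just whatever fd bash picked for <(...).
SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
gen_config() { printf '{"subsystems": []}'; }   # hypothetical placeholder
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(gen_config) &
nvmfpid=$!
# Poll the RPC socket before issuing further calls (what waitforlisten does).
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock -t 30 rpc_get_methods >/dev/null
echo "nvmf_tgt (pid $nvmfpid) is up"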
00:19:23.814 [2024-11-06 13:49:04.646222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.073 [2024-11-06 13:49:04.916433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.073 [2024-11-06 13:49:04.948373] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.073 [2024-11-06 13:49:04.948604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84800 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84800 /var/tmp/bdevperf.sock 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84800 ']' 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
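The TLS listener reported above ("NVMe/TCP Target Listening on 10.0.0.3 port 4420") comes from the nvmf_subsystem_add_listener entry in the streamed config, which pins sock_impl to ssl while leaving secure_channel false so the PSK path is exercised explicitly. The same listener could be added after startup instead of via -c; a sketch using the rpc.py tool seen elsewhere in this log (the short options follow SPDK's usual spellings, but verify with -h on the tree under test):

# Sketch: add the TCP listener to cnode1 at runtime.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4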
00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:24.641 "subsystems": [ 00:19:24.641 { 00:19:24.641 "subsystem": "keyring", 00:19:24.641 "config": [ 00:19:24.641 { 00:19:24.641 "method": "keyring_file_add_key", 00:19:24.641 "params": { 00:19:24.641 "name": "key0", 00:19:24.641 "path": "/tmp/tmp.pyYf7ozTxW" 00:19:24.641 } 00:19:24.641 } 00:19:24.641 ] 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "subsystem": "iobuf", 00:19:24.641 "config": [ 00:19:24.641 { 00:19:24.641 "method": "iobuf_set_options", 00:19:24.641 "params": { 00:19:24.641 "enable_numa": false, 00:19:24.641 "large_bufsize": 135168, 00:19:24.641 "large_pool_count": 1024, 00:19:24.641 "small_bufsize": 8192, 00:19:24.641 "small_pool_count": 8192 00:19:24.641 } 00:19:24.641 } 00:19:24.641 ] 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "subsystem": "sock", 00:19:24.641 "config": [ 00:19:24.641 { 00:19:24.641 "method": "sock_set_default_impl", 00:19:24.641 "params": { 00:19:24.641 "impl_name": "posix" 00:19:24.641 } 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "method": "sock_impl_set_options", 00:19:24.641 "params": { 00:19:24.641 "enable_ktls": false, 00:19:24.641 "enable_placement_id": 0, 00:19:24.641 "enable_quickack": false, 00:19:24.641 "enable_recv_pipe": true, 00:19:24.641 "enable_zerocopy_send_client": false, 00:19:24.641 "enable_zerocopy_send_server": true, 00:19:24.641 "impl_name": "ssl", 00:19:24.641 "recv_buf_size": 4096, 00:19:24.641 "send_buf_size": 4096, 00:19:24.641 "tls_version": 0, 00:19:24.641 "zerocopy_threshold": 0 00:19:24.641 } 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "method": "sock_impl_set_options", 00:19:24.641 "params": { 00:19:24.641 "enable_ktls": false, 00:19:24.641 "enable_placement_id": 0, 00:19:24.641 "enable_quickack": false, 00:19:24.641 "enable_recv_pipe": true, 00:19:24.641 "enable_zerocopy_send_client": false, 00:19:24.641 "enable_zerocopy_send_server": true, 00:19:24.641 "impl_name": "posix", 00:19:24.641 "recv_buf_size": 2097152, 00:19:24.641 "send_buf_size": 2097152, 00:19:24.641 "tls_version": 0, 00:19:24.641 "zerocopy_threshold": 0 00:19:24.641 } 00:19:24.641 } 00:19:24.641 ] 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "subsystem": "vmd", 00:19:24.641 "config": [] 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "subsystem": "accel", 00:19:24.641 "config": [ 00:19:24.641 { 00:19:24.641 "method": "accel_set_options", 00:19:24.641 "params": { 00:19:24.641 "buf_count": 2048, 00:19:24.641 "large_cache_size": 16, 00:19:24.641 "sequence_count": 2048, 00:19:24.641 "small_cache_size": 128, 00:19:24.641 "task_count": 2048 00:19:24.641 } 00:19:24.641 } 00:19:24.641 ] 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "subsystem": "bdev", 00:19:24.641 "config": [ 00:19:24.641 { 00:19:24.641 "method": "bdev_set_options", 00:19:24.641 "params": { 00:19:24.641 "bdev_auto_examine": true, 00:19:24.641 "bdev_io_cache_size": 256, 00:19:24.641 "bdev_io_pool_size": 65535, 00:19:24.641 "iobuf_large_cache_size": 16, 00:19:24.641 "iobuf_small_cache_size": 128 00:19:24.641 } 00:19:24.641 }, 00:19:24.641 { 00:19:24.641 "method": "bdev_raid_set_options", 
00:19:24.641 "params": { 00:19:24.641 "process_max_bandwidth_mb_sec": 0, 00:19:24.641 "process_window_size_kb": 1024 00:19:24.641 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_iscsi_set_options", 00:19:24.642 "params": { 00:19:24.642 "timeout_sec": 30 00:19:24.642 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_nvme_set_options", 00:19:24.642 "params": { 00:19:24.642 "action_on_timeout": "none", 00:19:24.642 "allow_accel_sequence": false, 00:19:24.642 "arbitration_burst": 0, 00:19:24.642 "bdev_retry_count": 3, 00:19:24.642 "ctrlr_loss_timeout_sec": 0, 00:19:24.642 "delay_cmd_submit": true, 00:19:24.642 "dhchap_dhgroups": [ 00:19:24.642 "null", 00:19:24.642 "ffdhe2048", 00:19:24.642 "ffdhe3072", 00:19:24.642 "ffdhe4096", 00:19:24.642 "ffdhe6144", 00:19:24.642 "ffdhe8192" 00:19:24.642 ], 00:19:24.642 "dhchap_digests": [ 00:19:24.642 "sha256", 00:19:24.642 "sha384", 00:19:24.642 "sha512" 00:19:24.642 ], 00:19:24.642 "disable_auto_failback": false, 00:19:24.642 "fast_io_fail_timeout_sec": 0, 00:19:24.642 "generate_uuids": false, 00:19:24.642 "high_priority_weight": 0, 00:19:24.642 "io_path_stat": false, 00:19:24.642 "io_queue_requests": 512, 00:19:24.642 "keep_alive_timeout_ms": 10000, 00:19:24.642 "low_priority_weight": 0, 00:19:24.642 "medium_priority_weight": 0, 00:19:24.642 "nvme_adminq_poll_period_us": 10000, 00:19:24.642 "nvme_error_stat": false, 00:19:24.642 "nvme_ioq_poll_period_us": 0, 00:19:24.642 "rdma_cm_event_timeout_ms": 0, 00:19:24.642 "rdma_max_cq_size": 0, 00:19:24.642 "rdma_srq_size": 0, 00:19:24.642 "reconnect_delay_sec": 0, 00:19:24.642 "timeout_admin_us": 0, 00:19:24.642 "timeout_us": 0, 00:19:24.642 "transport_ack_timeout": 0, 00:19:24.642 "transport_retry_count": 4, 00:19:24.642 "transport_tos": 0 00:19:24.642 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_nvme_attach_controller", 00:19:24.642 "params": { 00:19:24.642 "adrfam": "IPv4", 00:19:24.642 "ctrlr_loss_timeout_sec": 0, 00:19:24.642 "ddgst": false, 00:19:24.642 "fast_io_fail_timeout_sec": 0, 00:19:24.642 "hdgst": false, 00:19:24.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.642 "multipath": "multipath", 00:19:24.642 "name": "nvme0", 00:19:24.642 "prchk_guard": false, 00:19:24.642 "prchk_reftag": false, 00:19:24.642 "psk": "key0", 00:19:24.642 "reconnect_delay_sec": 0, 00:19:24.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.642 "traddr": "10.0.0.3", 00:19:24.642 "trsvcid": "4420", 00:19:24.642 "trtype": "TCP" 00:19:24.642 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_nvme_set_hotplug", 00:19:24.642 "params": { 00:19:24.642 "enable": false, 00:19:24.642 "period_us": 100000 00:19:24.642 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_enable_histogram", 00:19:24.642 "params": { 00:19:24.642 "enable": true, 00:19:24.642 "name": "nvme0n1" 00:19:24.642 } 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "method": "bdev_wait_for_examine" 00:19:24.642 } 00:19:24.642 ] 00:19:24.642 }, 00:19:24.642 { 00:19:24.642 "subsystem": "nbd", 00:19:24.642 "config": [] 00:19:24.642 } 00:19:24.642 ] 00:19:24.642 }' 00:19:24.642 [2024-11-06 13:49:05.430665] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:19:24.642 [2024-11-06 13:49:05.430744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84800 ] 00:19:24.901 [2024-11-06 13:49:05.584185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.901 [2024-11-06 13:49:05.642095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.163 [2024-11-06 13:49:05.850521] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.422 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.422 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:25.422 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:25.422 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:25.681 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.681 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.939 Running I/O for 1 seconds... 00:19:26.875 5998.00 IOPS, 23.43 MiB/s 00:19:26.875 Latency(us) 00:19:26.875 [2024-11-06T13:49:07.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.875 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:26.875 Verification LBA range: start 0x0 length 0x2000 00:19:26.875 nvme0n1 : 1.01 6052.33 23.64 0.00 0.00 21000.68 4737.54 15581.25 00:19:26.875 [2024-11-06T13:49:07.808Z] =================================================================================================================== 00:19:26.875 [2024-11-06T13:49:07.808Z] Total : 6052.33 23.64 0.00 0.00 21000.68 4737.54 15581.25 00:19:26.875 { 00:19:26.875 "results": [ 00:19:26.875 { 00:19:26.875 "job": "nvme0n1", 00:19:26.875 "core_mask": "0x2", 00:19:26.875 "workload": "verify", 00:19:26.875 "status": "finished", 00:19:26.875 "verify_range": { 00:19:26.875 "start": 0, 00:19:26.875 "length": 8192 00:19:26.875 }, 00:19:26.875 "queue_depth": 128, 00:19:26.875 "io_size": 4096, 00:19:26.875 "runtime": 1.012172, 00:19:26.875 "iops": 6052.33102674249, 00:19:26.875 "mibps": 23.641918073212853, 00:19:26.875 "io_failed": 0, 00:19:26.875 "io_timeout": 0, 00:19:26.875 "avg_latency_us": 21000.67615470042, 00:19:26.875 "min_latency_us": 4737.542168674699, 00:19:26.875 "max_latency_us": 15581.249799196787 00:19:26.875 } 00:19:26.875 ], 00:19:26.875 "core_count": 1 00:19:26.875 } 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:26.875 
13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:26.875 nvmf_trace.0 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84800 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84800 ']' 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84800 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.875 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84800 00:19:27.135 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:27.135 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:27.135 killing process with pid 84800 00:19:27.135 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84800' 00:19:27.135 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84800 00:19:27.135 Received shutdown signal, test time was about 1.000000 seconds 00:19:27.135 00:19:27.135 Latency(us) 00:19:27.135 [2024-11-06T13:49:08.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.135 [2024-11-06T13:49:08.068Z] =================================================================================================================== 00:19:27.135 [2024-11-06T13:49:08.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.135 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84800 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.394 rmmod nvme_tcp 00:19:27.394 rmmod nvme_fabrics 00:19:27.394 rmmod nvme_keyring 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84756 ']' 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84756 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84756 ']' 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84756 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84756 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84756' 00:19:27.394 killing process with pid 84756 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84756 00:19:27.394 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84756 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:27.653 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:27.912 13:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QuicOnWvSi /tmp/tmp.5SZlbHaX81 /tmp/tmp.pyYf7ozTxW 00:19:27.912 00:19:27.912 real 1m27.896s 00:19:27.912 user 2m13.630s 00:19:27.912 sys 0m33.433s 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.912 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.912 ************************************ 00:19:27.912 END TEST nvmf_tls 00:19:27.912 ************************************ 00:19:28.172 13:49:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:28.172 13:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:28.172 13:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:28.172 13:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.172 ************************************ 00:19:28.172 START TEST nvmf_fips 00:19:28.172 ************************************ 00:19:28.172 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:28.172 * Looking for test storage... 
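The firewall teardown in the cleanup above works because every rule nvmf_veth_init installs is tagged with an iptables comment beginning SPDK_NVMF (the install side is visible again in the fips setup below); iptr then removes exactly those rules by round-tripping the ruleset through iptables-save and filtering the tagged lines out. The idiom, condensed from the commands in this log:

# Install with a recognizable tag...
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# ...and later remove every tagged rule in one shot, leaving other rules intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore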
00:19:28.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.172 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:28.432 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.433 --rc genhtml_branch_coverage=1 00:19:28.433 --rc genhtml_function_coverage=1 00:19:28.433 --rc genhtml_legend=1 00:19:28.433 --rc geninfo_all_blocks=1 00:19:28.433 --rc geninfo_unexecuted_blocks=1 00:19:28.433 00:19:28.433 ' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.433 --rc genhtml_branch_coverage=1 00:19:28.433 --rc genhtml_function_coverage=1 00:19:28.433 --rc genhtml_legend=1 00:19:28.433 --rc geninfo_all_blocks=1 00:19:28.433 --rc geninfo_unexecuted_blocks=1 00:19:28.433 00:19:28.433 ' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.433 --rc genhtml_branch_coverage=1 00:19:28.433 --rc genhtml_function_coverage=1 00:19:28.433 --rc genhtml_legend=1 00:19:28.433 --rc geninfo_all_blocks=1 00:19:28.433 --rc geninfo_unexecuted_blocks=1 00:19:28.433 00:19:28.433 ' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.433 --rc genhtml_branch_coverage=1 00:19:28.433 --rc genhtml_function_coverage=1 00:19:28.433 --rc genhtml_legend=1 00:19:28.433 --rc geninfo_all_blocks=1 00:19:28.433 --rc geninfo_unexecuted_blocks=1 00:19:28.433 00:19:28.433 ' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
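The trace above is scripts/common.sh's cmp_versions splitting each version string on ".", "-" and ":" and walking the fields left to right; here it establishes lcov 1.15 < 2 and selects the legacy --rc coverage options. A compact reimplementation of the less-than case, offered as a sketch rather than the exact helper:

# Sketch of the field-wise compare traced above: returns 0 when $1 < $2.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # greater: not less-than
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1    # equal: not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"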
00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.433 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:28.433 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:28.434 Error setting digest 00:19:28.434 40E21D8E837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:28.434 40E21D8E837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:28.434 
13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.434 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.693 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:28.693 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:28.694 Cannot find device "nvmf_init_br" 00:19:28.694 13:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:28.694 Cannot find device "nvmf_init_br2" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:28.694 Cannot find device "nvmf_tgt_br" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.694 Cannot find device "nvmf_tgt_br2" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:28.694 Cannot find device "nvmf_init_br" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:28.694 Cannot find device "nvmf_init_br2" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:28.694 Cannot find device "nvmf_tgt_br" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:28.694 Cannot find device "nvmf_tgt_br2" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:28.694 Cannot find device "nvmf_br" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:28.694 Cannot find device "nvmf_init_if" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:28.694 Cannot find device "nvmf_init_if2" 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.694 13:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.694 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:28.953 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
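
The trace above is nvmf_veth_init building the test topology: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined through the nvmf_br bridge. Condensed to one initiator/target pair as a minimal sketch (the names and addresses come from the trace; the condensation itself is not harness code, and the second pair, nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, is built the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge-facing peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge-facing peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target interface lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # the bridge joins the two halves
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The initiator side can then reach 10.0.0.3 and 10.0.0.4 inside the namespace over the bridge, which is what the pings below verify.
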
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:28.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:28.954 00:19:28.954 --- 10.0.0.3 ping statistics --- 00:19:28.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.954 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:28.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:28.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:19:28.954 00:19:28.954 --- 10.0.0.4 ping statistics --- 00:19:28.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.954 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:28.954 00:19:28.954 --- 10.0.0.1 ping statistics --- 00:19:28.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.954 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:28.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:28.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:19:28.954 00:19:28.954 --- 10.0.0.2 ping statistics --- 00:19:28.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.954 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85141 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85141 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85141 ']' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.954 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:29.213 [2024-11-06 13:49:09.958804] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
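
nvmfappstart wraps the target binary in the namespace that was just built; the launch traced above reduces to the following sketch (the polling loop is an assumption standing in for waitforlisten, not the harness code, but rpc_get_methods is a real SPDK RPC):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the target answers on its UNIX-domain RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
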
00:19:29.213 [2024-11-06 13:49:09.958878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.213 [2024-11-06 13:49:10.112505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.472 [2024-11-06 13:49:10.168851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.472 [2024-11-06 13:49:10.168917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.472 [2024-11-06 13:49:10.168927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.472 [2024-11-06 13:49:10.168935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.472 [2024-11-06 13:49:10.168942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.472 [2024-11-06 13:49:10.169333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.038 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:30.038 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:30.038 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.cQz 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.cQz 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.cQz 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.cQz 00:19:30.039 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.297 [2024-11-06 13:49:11.155224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.297 [2024-11-06 13:49:11.171148] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.297 [2024-11-06 13:49:11.171366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.297 malloc0 00:19:30.556 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.556 13:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85201 00:19:30.556 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85201 /var/tmp/bdevperf.sock 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85201 ']' 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.557 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:30.557 [2024-11-06 13:49:11.323044] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:30.557 [2024-11-06 13:49:11.323138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85201 ] 00:19:30.557 [2024-11-06 13:49:11.478534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.815 [2024-11-06 13:49:11.537909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.383 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.383 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:31.383 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.cQz 00:19:31.642 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.642 [2024-11-06 13:49:12.567933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.900 TLSTESTn1 00:19:31.900 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.900 Running I/O for 10 seconds... 
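
The TLS wiring traced above has three steps: fips.sh writes the NVMe TLS PSK interchange string to a mode-0600 temp file, registers it with bdevperf's keyring over the side-channel RPC socket, and attaches a controller with --psk so the fabric connection is encrypted. The same sequence pulled out of the trace (rpc.py and bdevperf.py are shorthand for their full /home/vagrant/spdk_repo/spdk/... paths):

  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"                                    # keep the PSK private to this user
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests       # drives the 10 s verify run below
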
00:19:34.214 5953.00 IOPS, 23.25 MiB/s
[2024-11-06T13:49:16.082Z] 5974.00 IOPS, 23.34 MiB/s
[2024-11-06T13:49:17.018Z] 6007.67 IOPS, 23.47 MiB/s
[2024-11-06T13:49:17.953Z] 6003.00 IOPS, 23.45 MiB/s
[2024-11-06T13:49:18.889Z] 6017.20 IOPS, 23.50 MiB/s
[2024-11-06T13:49:19.826Z] 6022.17 IOPS, 23.52 MiB/s
[2024-11-06T13:49:20.761Z] 6029.14 IOPS, 23.55 MiB/s
[2024-11-06T13:49:22.139Z] 6028.88 IOPS, 23.55 MiB/s
[2024-11-06T13:49:23.076Z] 6027.56 IOPS, 23.55 MiB/s
[2024-11-06T13:49:23.076Z] 6028.70 IOPS, 23.55 MiB/s
00:19:42.143 Latency(us)
00:19:42.143 [2024-11-06T13:49:23.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:42.143 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:42.144 Verification LBA range: start 0x0 length 0x2000
00:19:42.144 TLSTESTn1 : 10.01 6034.74 23.57 0.00 0.00 21178.55 3868.99 16212.92
00:19:42.144 [2024-11-06T13:49:23.077Z] ===================================================================================================================
00:19:42.144 [2024-11-06T13:49:23.077Z] Total : 6034.74 23.57 0.00 0.00 21178.55 3868.99 16212.92
00:19:42.144 {
00:19:42.144 "results": [
00:19:42.144 {
00:19:42.144 "job": "TLSTESTn1",
00:19:42.144 "core_mask": "0x4",
00:19:42.144 "workload": "verify",
00:19:42.144 "status": "finished",
00:19:42.144 "verify_range": {
00:19:42.144 "start": 0,
00:19:42.144 "length": 8192
00:19:42.144 },
00:19:42.144 "queue_depth": 128,
00:19:42.144 "io_size": 4096,
00:19:42.144 "runtime": 10.010863,
00:19:42.144 "iops": 6034.744457096256,
00:19:42.144 "mibps": 23.57322053553225,
00:19:42.144 "io_failed": 0,
00:19:42.144 "io_timeout": 0,
00:19:42.144 "avg_latency_us": 21178.54533955264,
00:19:42.144 "min_latency_us": 3868.992771084337,
00:19:42.144 "max_latency_us": 16212.922088353414
00:19:42.144 }
00:19:42.144 ],
00:19:42.144 "core_count": 1
00:19:42.144 }
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:42.144 nvmf_trace.0
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85201
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85201 ']'
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 85201
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85201
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:19:42.144 killing process with pid 85201
13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85201'
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85201
00:19:42.144 Received shutdown signal, test time was about 10.000000 seconds
00:19:42.144
00:19:42.144 Latency(us)
00:19:42.144 [2024-11-06T13:49:23.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:42.144 [2024-11-06T13:49:23.077Z] ===================================================================================================================
00:19:42.144 [2024-11-06T13:49:23.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:42.144 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85201
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:42.402 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:42.402 rmmod nvme_tcp
00:19:42.402 rmmod nvme_fabrics
00:19:42.402 rmmod nvme_keyring
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85141 ']'
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85141
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85141 ']'
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 85141
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85141
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '['
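
The run settles around 6k IOPS at 4 KiB / QD128 over the TLS-wrapped connection, and the JSON block above is the machine-readable copy of the Latency table. If that JSON is saved to a file (results.json here is hypothetical; perform_tests only printed it to the log), the headline numbers can be pulled out with:

  jq '.results[0] | {job, iops, mibps, avg_latency_us}' results.json

The all-zero Latency table printed after "Received shutdown signal" looks like bdevperf's final summary emitted during teardown, after the 10-second run had already completed and been reported, rather than a failure.
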
reactor_1 = sudo ']' 00:19:42.661 killing process with pid 85141 00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85141' 00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85141 00:19:42.661 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85141 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:42.920 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.179 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.180 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.180 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:43.180 13:49:23 
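
nvmftestfini above unwinds everything nvmftestinit built: the nvme-tcp/fabrics/keyring modules are unloaded, the SPDK-tagged iptables rules are dropped, and the veth/bridge topology is torn down. Condensed from the trace (the final netns delete is an assumption; _remove_spdk_ns itself is not traced here):

  iptables-save | grep -v SPDK_NVMF | iptables-restore      # iptr: drop only the tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # assumed equivalent of _remove_spdk_ns
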
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.cQz 00:19:43.180 00:19:43.180 real 0m15.092s 00:19:43.180 user 0m19.595s 00:19:43.180 sys 0m6.586s 00:19:43.180 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.180 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:43.180 ************************************ 00:19:43.180 END TEST nvmf_fips 00:19:43.180 ************************************ 00:19:43.180 13:49:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:43.180 13:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:43.180 13:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:43.180 13:49:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.180 ************************************ 00:19:43.180 START TEST nvmf_control_msg_list 00:19:43.180 ************************************ 00:19:43.180 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:43.440 * Looking for test storage... 00:19:43.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.440 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.441 --rc genhtml_branch_coverage=1 00:19:43.441 --rc genhtml_function_coverage=1 00:19:43.441 --rc genhtml_legend=1 00:19:43.441 --rc geninfo_all_blocks=1 00:19:43.441 --rc geninfo_unexecuted_blocks=1 00:19:43.441 00:19:43.441 ' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.441 --rc genhtml_branch_coverage=1 00:19:43.441 --rc genhtml_function_coverage=1 00:19:43.441 --rc genhtml_legend=1 00:19:43.441 --rc geninfo_all_blocks=1 00:19:43.441 --rc geninfo_unexecuted_blocks=1 00:19:43.441 00:19:43.441 ' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.441 --rc genhtml_branch_coverage=1 00:19:43.441 --rc genhtml_function_coverage=1 00:19:43.441 --rc genhtml_legend=1 00:19:43.441 --rc geninfo_all_blocks=1 00:19:43.441 --rc geninfo_unexecuted_blocks=1 00:19:43.441 00:19:43.441 ' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:43.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.441 --rc genhtml_branch_coverage=1 00:19:43.441 --rc genhtml_function_coverage=1 00:19:43.441 --rc genhtml_legend=1 00:19:43.441 --rc geninfo_all_blocks=1 00:19:43.441 --rc 
geninfo_unexecuted_blocks=1 00:19:43.441 00:19:43.441 ' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.441 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.442 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:43.442 Cannot find device "nvmf_init_br" 00:19:43.442 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:43.442 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:43.442 Cannot find device "nvmf_init_br2" 00:19:43.442 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:43.442 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:43.701 Cannot find device "nvmf_tgt_br" 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.701 Cannot find device "nvmf_tgt_br2" 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:43.701 Cannot find device "nvmf_init_br" 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.701 Cannot find device "nvmf_init_br2" 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.701 Cannot find device "nvmf_tgt_br" 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:43.701 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.701 Cannot find device "nvmf_tgt_br2" 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.702 Cannot find device "nvmf_br" 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:43.702 Cannot find 
device "nvmf_init_if" 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.702 Cannot find device "nvmf_init_if2" 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.702 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.961 13:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:43.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:19:43.961 00:19:43.961 --- 10.0.0.3 ping statistics --- 00:19:43.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.961 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:43.961 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:43.962 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:43.962 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:19:43.962 00:19:43.962 --- 10.0.0.4 ping statistics --- 00:19:43.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.962 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:43.962 00:19:43.962 --- 10.0.0.1 ping statistics --- 00:19:43.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.962 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:43.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:43.962 00:19:43.962 --- 10.0.0.2 ping statistics --- 00:19:43.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.962 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85614 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85614 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 85614 ']' 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
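
Note the control_msg_list target is started the same way as the fips one but without the -m core-mask argument, so it comes up on the default mask (the reactor lands on core 0 in the trace below):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
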
00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:43.962 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:44.222 [2024-11-06 13:49:24.905197] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:44.222 [2024-11-06 13:49:24.905257] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.222 [2024-11-06 13:49:25.055716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.222 [2024-11-06 13:49:25.111803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.222 [2024-11-06 13:49:25.112104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.222 [2024-11-06 13:49:25.112162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.222 [2024-11-06 13:49:25.112220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.222 [2024-11-06 13:49:25.112270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.222 [2024-11-06 13:49:25.112698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 [2024-11-06 13:49:25.850315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 Malloc0 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 [2024-11-06 13:49:25.908731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85664 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85665 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85666 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:45.160 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85664 00:19:45.455 [2024-11-06 13:49:26.098703] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:19:45.455 [2024-11-06 13:49:26.108987] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:45.455 [2024-11-06 13:49:26.109154] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:46.394 Initializing NVMe Controllers 00:19:46.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:46.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:46.394 Initialization complete. Launching workers. 00:19:46.394 ======================================================== 00:19:46.394 Latency(us) 00:19:46.394 Device Information : IOPS MiB/s Average min max 00:19:46.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4920.97 19.22 203.02 86.16 562.15 00:19:46.394 ======================================================== 00:19:46.394 Total : 4920.97 19.22 203.02 86.16 562.15 00:19:46.394 00:19:46.394 Initializing NVMe Controllers 00:19:46.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:46.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:46.394 Initialization complete. Launching workers. 00:19:46.394 ======================================================== 00:19:46.394 Latency(us) 00:19:46.394 Device Information : IOPS MiB/s Average min max 00:19:46.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4887.00 19.09 204.40 104.02 558.68 00:19:46.394 ======================================================== 00:19:46.394 Total : 4887.00 19.09 204.40 104.02 558.68 00:19:46.394 00:19:46.394 Initializing NVMe Controllers 00:19:46.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:46.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:46.394 Initialization complete. Launching workers. 
00:19:46.394 ======================================================== 00:19:46.394 Latency(us) 00:19:46.394 Device Information : IOPS MiB/s Average min max 00:19:46.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4880.97 19.07 204.65 103.06 561.57 00:19:46.394 ======================================================== 00:19:46.394 Total : 4880.97 19.07 204.65 103.06 561.57 00:19:46.394 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85665 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85666 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.394 rmmod nvme_tcp 00:19:46.394 rmmod nvme_fabrics 00:19:46.394 rmmod nvme_keyring 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85614 ']' 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85614 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 85614 ']' 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 85614 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:46.394 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85614 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:46.653 killing process with pid 85614 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85614' 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 85614 00:19:46.653 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 85614 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:46.913 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:47.173 00:19:47.173 real 0m3.906s 00:19:47.173 user 0m5.242s 00:19:47.173 
sys 0m2.009s 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:47.173 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.173 ************************************ 00:19:47.173 END TEST nvmf_control_msg_list 00:19:47.173 ************************************ 00:19:47.173 13:49:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:47.173 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:47.173 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:47.173 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.173 ************************************ 00:19:47.173 START TEST nvmf_wait_for_buf 00:19:47.173 ************************************ 00:19:47.173 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:47.434 * Looking for test storage... 00:19:47.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.434 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:47.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.435 --rc genhtml_branch_coverage=1 00:19:47.435 --rc genhtml_function_coverage=1 00:19:47.435 --rc genhtml_legend=1 00:19:47.435 --rc geninfo_all_blocks=1 00:19:47.435 --rc geninfo_unexecuted_blocks=1 00:19:47.435 00:19:47.435 ' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:47.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.435 --rc genhtml_branch_coverage=1 00:19:47.435 --rc genhtml_function_coverage=1 00:19:47.435 --rc genhtml_legend=1 00:19:47.435 --rc geninfo_all_blocks=1 00:19:47.435 --rc geninfo_unexecuted_blocks=1 00:19:47.435 00:19:47.435 ' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:47.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.435 --rc genhtml_branch_coverage=1 00:19:47.435 --rc genhtml_function_coverage=1 00:19:47.435 --rc genhtml_legend=1 00:19:47.435 --rc geninfo_all_blocks=1 00:19:47.435 --rc geninfo_unexecuted_blocks=1 00:19:47.435 00:19:47.435 ' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:47.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.435 --rc genhtml_branch_coverage=1 00:19:47.435 --rc genhtml_function_coverage=1 00:19:47.435 --rc genhtml_legend=1 00:19:47.435 --rc geninfo_all_blocks=1 00:19:47.435 --rc geninfo_unexecuted_blocks=1 00:19:47.435 00:19:47.435 ' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.435 13:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
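One line in the trace above is worth flagging: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". build_nvmf_app_args runs '[' '' -eq 1 ']' there, and test's -eq rejects an empty operand, so the comparison errors out rather than evaluating to false; the script still skips the branch (a non-zero status reads as false), which is why the run continues unharmed. A hedged sketch of the usual defensive rewrite (SOME_FLAG is a stand-in; the real variable name is not visible in this trace):

# default the value so an unset variable reads as 0 instead of ""
# (bash's [[ ]] also arithmetic-evaluates -eq operands, so even an
#  empty value would compare as 0 here rather than raise an error)
if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
    echo "flag-enabled path"
fi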
00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:47.435 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.436 Cannot find device "nvmf_init_br" 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.436 Cannot find device "nvmf_init_br2" 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.436 Cannot find device "nvmf_tgt_br" 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:47.436 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.695 Cannot find device "nvmf_tgt_br2" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:47.695 Cannot find device "nvmf_init_br" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:47.695 Cannot find device "nvmf_init_br2" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:47.695 Cannot find device "nvmf_tgt_br" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:47.695 Cannot find device "nvmf_tgt_br2" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:47.695 Cannot find device "nvmf_br" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:47.695 Cannot find device "nvmf_init_if" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:47.695 Cannot find device "nvmf_init_if2" 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.695 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.695 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.955 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:47.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:19:47.956 00:19:47.956 --- 10.0.0.3 ping statistics --- 00:19:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.956 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:47.956 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:47.956 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:19:47.956 00:19:47.956 --- 10.0.0.4 ping statistics --- 00:19:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.956 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:47.956 00:19:47.956 --- 10.0.0.1 ping statistics --- 00:19:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.956 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:47.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:47.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:47.956 00:19:47.956 --- 10.0.0.2 ping statistics --- 00:19:47.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.956 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85902 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85902 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 85902 ']' 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:47.956 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:48.215 [2024-11-06 13:49:28.921685] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
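nvmfappstart, which begins here with nvmfpid=85902, launches nvmf_tgt inside the namespace, this time with --wait-for-rpc so that subsystem initialization is deferred until the test sends framework_start_init, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, assuming the repo paths from the trace and using the standard rpc_get_methods RPC purely as a liveness probe:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# block until the RPC socket answers; bail out if the target dies first
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done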
00:19:48.215 [2024-11-06 13:49:28.921760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.215 [2024-11-06 13:49:29.073493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.215 [2024-11-06 13:49:29.133816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.215 [2024-11-06 13:49:29.133887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.215 [2024-11-06 13:49:29.133898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.215 [2024-11-06 13:49:29.133906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.215 [2024-11-06 13:49:29.133913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.215 [2024-11-06 13:49:29.134288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:49.152 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 Malloc0 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 [2024-11-06 13:49:30.026682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:49.153 [2024-11-06 13:49:30.062739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.153 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:49.412 [2024-11-06 13:49:30.256018] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:19:50.789 Initializing NVMe Controllers 00:19:50.789 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:50.789 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:50.789 Initialization complete. Launching workers. 00:19:50.789 ======================================================== 00:19:50.789 Latency(us) 00:19:50.789 Device Information : IOPS MiB/s Average min max 00:19:50.789 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.87 32729.77 8032.36 64115.92 00:19:50.789 ======================================================== 00:19:50.789 Total : 127.00 15.87 32729.77 8032.36 64115.92 00:19:50.789 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:19:50.789 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.790 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.790 rmmod nvme_tcp 00:19:50.790 rmmod nvme_fabrics 00:19:50.790 rmmod nvme_keyring 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85902 ']' 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85902 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 85902 ']' 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 85902 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 
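The retry_count=2006 extraction just above is the whole point of wait_for_buf: the transport was created with a deliberately starved small-buffer pool (-n 24 -b 24, with the iobuf small-pool-count pinned to 154 before framework_start_init), so a queue-depth-4 randread at 128 KiB must exhaust it, and the nvmf_TCP module has to record small-pool retries instead of failing outright. The assertion reduces to this sketch (RPC name and jq filter copied from the trace):

retry=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry" -eq 0 ]]; then
    echo "expected the small iobuf pool to starve, but no retries were recorded" >&2
    exit 1
fi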
00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85902 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:51.048 killing process with pid 85902 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85902' 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 85902 00:19:51.048 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 85902 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:51.307 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.566 13:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:51.566 00:19:51.566 real 0m4.309s 00:19:51.566 user 0m3.371s 00:19:51.566 sys 0m1.224s 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:51.566 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.566 ************************************ 00:19:51.567 END TEST nvmf_wait_for_buf 00:19:51.567 ************************************ 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.567 ************************************ 00:19:51.567 START TEST nvmf_nsid 00:19:51.567 ************************************ 00:19:51.567 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:51.827 * Looking for test storage... 
00:19:51.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:51.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.827 --rc genhtml_branch_coverage=1 00:19:51.827 --rc genhtml_function_coverage=1 00:19:51.827 --rc genhtml_legend=1 00:19:51.827 --rc geninfo_all_blocks=1 00:19:51.827 --rc geninfo_unexecuted_blocks=1 00:19:51.827 00:19:51.827 ' 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:51.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.827 --rc genhtml_branch_coverage=1 00:19:51.827 --rc genhtml_function_coverage=1 00:19:51.827 --rc genhtml_legend=1 00:19:51.827 --rc geninfo_all_blocks=1 00:19:51.827 --rc geninfo_unexecuted_blocks=1 00:19:51.827 00:19:51.827 ' 00:19:51.827 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:51.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.827 --rc genhtml_branch_coverage=1 00:19:51.827 --rc genhtml_function_coverage=1 00:19:51.827 --rc genhtml_legend=1 00:19:51.827 --rc geninfo_all_blocks=1 00:19:51.827 --rc geninfo_unexecuted_blocks=1 00:19:51.827 00:19:51.828 ' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:51.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.828 --rc genhtml_branch_coverage=1 00:19:51.828 --rc genhtml_function_coverage=1 00:19:51.828 --rc genhtml_legend=1 00:19:51.828 --rc geninfo_all_blocks=1 00:19:51.828 --rc geninfo_unexecuted_blocks=1 00:19:51.828 00:19:51.828 ' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
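The block above is scripts/common.sh deciding which lcov flag spelling to use: it reads lcov --version, splits each version string on the separators ".-:" into arrays, and compares them component-wise, so 1.15 < 2 selects the legacy --rc option set. A compact re-implementation of that comparison, written here as an assumption that mirrors cmp_versions rather than a copy of it:

  version_lt() {
          # Split "1.15" / "2" on the same separators common.sh uses.
          local -a v1 v2
          IFS='.-:' read -ra v1 <<< "$1"
          IFS='.-:' read -ra v2 <<< "$2"
          local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
          for (( i = 0; i < len; i++ )); do
                  # Missing components count as 0 (assumption; the traced
                  # run returns before ever reaching a missing component).
                  (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
                  (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          done
          return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "old lcov: use legacy --rc flag spelling"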
00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:51.828 Cannot find device "nvmf_init_br" 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:51.828 Cannot find device "nvmf_init_br2" 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:51.828 Cannot find device "nvmf_tgt_br" 00:19:51.828 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:19:51.829 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.093 Cannot find device "nvmf_tgt_br2" 00:19:52.093 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:19:52.093 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:52.093 Cannot find device "nvmf_init_br" 00:19:52.093 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:19:52.093 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:52.093 Cannot find device "nvmf_init_br2" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:52.094 Cannot find device "nvmf_tgt_br" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:52.094 Cannot find device "nvmf_tgt_br2" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:52.094 Cannot find device "nvmf_br" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:52.094 Cannot find device "nvmf_init_if" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:52.094 Cannot find device "nvmf_init_if2" 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:19:52.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.094 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.094 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
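At this point nvmf_veth_init has finished building the test network: one veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to a common bridge, so the initiator addresses (10.0.0.1, 10.0.0.2) and the target addresses (10.0.0.3, 10.0.0.4) share one L2 segment. Condensed to a single interface pair (the trace also builds the *_if2/*_br2 twins and, just below, inserts the iptables ACCEPT rules for port 4420), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end is namespaced below
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # ties both host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3   # the pings that follow in the log verify exactly this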
00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:52.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:52.366 00:19:52.366 --- 10.0.0.3 ping statistics --- 00:19:52.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.366 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:52.366 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:52.366 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:19:52.366 00:19:52.366 --- 10.0.0.4 ping statistics --- 00:19:52.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.366 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:19:52.366 00:19:52.366 --- 10.0.0.1 ping statistics --- 00:19:52.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.366 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:52.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:52.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:52.366 00:19:52.366 --- 10.0.0.2 ping statistics --- 00:19:52.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.366 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86193 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86193 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 86193 ']' 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.366 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 [2024-11-06 13:49:33.343839] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:19:52.626 [2024-11-06 13:49:33.343938] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.626 [2024-11-06 13:49:33.494212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.626 [2024-11-06 13:49:33.555271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.626 [2024-11-06 13:49:33.555317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.626 [2024-11-06 13:49:33.555327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.626 [2024-11-06 13:49:33.555335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.626 [2024-11-06 13:49:33.555342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.626 [2024-11-06 13:49:33.555693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86237 00:19:53.565 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3c8d81b4-11b2-40d8-8602-2d33a68734a5 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3b4be4dd-9d46-45f8-96e1-ed2a7bfe9623 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4db264d0-8d98-4258-aecf-8b198124ff2d 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 null0 00:19:53.566 null1 00:19:53.566 [2024-11-06 13:49:34.339230] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:19:53.566 [2024-11-06 13:49:34.339298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86237 ] 00:19:53.566 null2 00:19:53.566 [2024-11-06 13:49:34.345935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.566 [2024-11-06 13:49:34.370021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:53.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86237 /var/tmp/tgt2.sock 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 86237 ']' 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
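nsid.sh runs two targets side by side: the main one (pid 86193, started with -m 1 inside the namespace) owns the default /var/tmp/spdk.sock, so the second instance gets its own core mask and RPC socket, and every RPC meant for it is addressed with -s /var/tmp/tgt2.sock. A sketch of that arrangement, with the readiness poll written as an assumption standing in for the waitforlisten helper seen in the trace:

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk"/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!
  # Poll until the second target answers on its private socket
  # (waitforlisten above does a more careful version of this).
  until "$spdk"/scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods &>/dev/null; do
          sleep 0.1
  done
  # From here on, -s selects which target an RPC lands on.
  "$spdk"/scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp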
00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.566 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:53.566 [2024-11-06 13:49:34.490258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.825 [2024-11-06 13:49:34.555393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.085 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.085 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:54.085 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:54.344 [2024-11-06 13:49:35.250322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.344 [2024-11-06 13:49:35.266370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:54.603 nvme0n1 nvme0n2 00:19:54.603 nvme1n1 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:19:54.603 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- common/autotest_common.sh@1248 -- # return 0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3c8d81b4-11b2-40d8-8602-2d33a68734a5 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3c8d81b411b240d886022d33a68734a5 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3C8D81B411B240D886022D33A68734A5 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3C8D81B411B240D886022D33A68734A5 == \3\C\8\D\8\1\B\4\1\1\B\2\4\0\D\8\8\6\0\2\2\D\3\3\A\6\8\7\3\4\A\5 ]] 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3b4be4dd-9d46-45f8-96e1-ed2a7bfe9623 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3b4be4dd9d4645f896e1ed2a7bfe9623 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3B4BE4DD9D4645F896E1ED2A7BFE9623 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3B4BE4DD9D4645F896E1ED2A7BFE9623 == \3\B\4\B\E\4\D\D\9\D\4\6\4\5\F\8\9\6\E\1\E\D\2\A\7\B\F\E\9\6\2\3 ]] 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4db264d0-8d98-4258-aecf-8b198124ff2d 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4db264d08d984258aecf8b198124ff2d 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4DB264D08D984258AECF8B198124FF2D 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4DB264D08D984258AECF8B198124FF2D == \4\D\B\2\6\4\D\0\8\D\9\8\4\2\5\8\A\E\C\F\8\B\1\9\8\1\2\4\F\F\2\D ]] 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86237 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 86237 ']' 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 86237 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.982 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86237 00:19:56.242 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:56.242 killing process with pid 86237 00:19:56.242 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:56.242 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86237' 00:19:56.242 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 86237 00:19:56.242 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 86237 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:56.812 13:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.812 rmmod nvme_tcp 00:19:56.812 rmmod nvme_fabrics 00:19:56.812 rmmod nvme_keyring 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86193 ']' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86193 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 86193 ']' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 86193 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86193 00:19:56.812 killing process with pid 86193 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86193' 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 86193 00:19:56.812 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 86193 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.071 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.072 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.072 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:57.072 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:57.072 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:57.072 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 
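The NGUID comparisons traced a few lines up are the core of the nsid test: each namespace was created on the second target with a uuidgen-generated UUID, and after connecting, nvme id-ns must report an NGUID equal to that UUID with the dashes stripped and the hex upper-cased (3c8d81b4-11b2-40d8-8602-2d33a68734a5 becoming 3C8D81B411B240D886022D33A68734A5). A standalone form of one such check; the uuid2nguid helper here is an assumption mirroring the tr -d - step visible in the trace:

  uuid2nguid() { tr -d '-' <<< "${1^^}"; }   # strip dashes, upper-case (assumed helper)
  expect=$(uuid2nguid 3c8d81b4-11b2-40d8-8602-2d33a68734a5)
  # nvme-cli reports the nguid in lower case; normalize before comparing.
  got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ "$got" == "$expect" ]] || { echo "NGUID mismatch: $got != $expect" >&2; exit 1; }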
00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:19:57.331 00:19:57.331 real 0m5.836s 00:19:57.331 user 0m7.958s 00:19:57.331 sys 0m2.054s 00:19:57.331 ************************************ 00:19:57.331 END TEST nvmf_nsid 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.331 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:57.331 ************************************ 00:19:57.591 13:49:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:57.591 00:19:57.591 real 7m10.329s 00:19:57.591 user 16m34.700s 00:19:57.591 sys 1m56.340s 00:19:57.591 ************************************ 00:19:57.591 END TEST nvmf_target_extra 00:19:57.591 ************************************ 00:19:57.591 13:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.591 13:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.591 13:49:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:57.591 13:49:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:57.591 13:49:38 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:57.591 13:49:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.591 ************************************ 00:19:57.591 START TEST nvmf_host 00:19:57.591 ************************************ 00:19:57.591 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:57.591 * Looking for test storage... 
00:19:57.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:57.591 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:57.591 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:57.591 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.851 --rc genhtml_branch_coverage=1 00:19:57.851 --rc genhtml_function_coverage=1 00:19:57.851 --rc genhtml_legend=1 00:19:57.851 --rc geninfo_all_blocks=1 00:19:57.851 --rc geninfo_unexecuted_blocks=1 00:19:57.851 00:19:57.851 ' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:57.851 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:57.851 --rc genhtml_branch_coverage=1 00:19:57.851 --rc genhtml_function_coverage=1 00:19:57.851 --rc genhtml_legend=1 00:19:57.851 --rc geninfo_all_blocks=1 00:19:57.851 --rc geninfo_unexecuted_blocks=1 00:19:57.851 00:19:57.851 ' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.851 --rc genhtml_branch_coverage=1 00:19:57.851 --rc genhtml_function_coverage=1 00:19:57.851 --rc genhtml_legend=1 00:19:57.851 --rc geninfo_all_blocks=1 00:19:57.851 --rc geninfo_unexecuted_blocks=1 00:19:57.851 00:19:57.851 ' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:57.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.851 --rc genhtml_branch_coverage=1 00:19:57.851 --rc genhtml_function_coverage=1 00:19:57.851 --rc genhtml_legend=1 00:19:57.851 --rc geninfo_all_blocks=1 00:19:57.851 --rc geninfo_unexecuted_blocks=1 00:19:57.851 00:19:57.851 ' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.851 13:49:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.852 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
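The "[: : integer expression expected" message from common.sh line 33 above is bash's test builtin being handed an empty string where -eq requires a number; because the test is only used as a condition, the failing branch is skipped and the run continues. A two-line reproduction, with one conventional guard shown for contrast (the guard is illustrative, not what common.sh does):

    flag=''
    [ "$flag" -eq 1 ] && echo on       # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on  # defaulting empty to 0 avoids the error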
00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.852 ************************************ 00:19:57.852 START TEST nvmf_multicontroller 00:19:57.852 ************************************ 00:19:57.852 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:58.112 * Looking for test storage... 00:19:58.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.112 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.113 --rc genhtml_branch_coverage=1 00:19:58.113 --rc genhtml_function_coverage=1 00:19:58.113 --rc genhtml_legend=1 00:19:58.113 --rc geninfo_all_blocks=1 00:19:58.113 --rc geninfo_unexecuted_blocks=1 00:19:58.113 00:19:58.113 ' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.113 --rc genhtml_branch_coverage=1 00:19:58.113 --rc genhtml_function_coverage=1 00:19:58.113 --rc genhtml_legend=1 00:19:58.113 --rc geninfo_all_blocks=1 00:19:58.113 --rc geninfo_unexecuted_blocks=1 00:19:58.113 00:19:58.113 ' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.113 --rc genhtml_branch_coverage=1 00:19:58.113 --rc genhtml_function_coverage=1 00:19:58.113 --rc genhtml_legend=1 00:19:58.113 --rc geninfo_all_blocks=1 00:19:58.113 --rc geninfo_unexecuted_blocks=1 00:19:58.113 00:19:58.113 ' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.113 --rc genhtml_branch_coverage=1 00:19:58.113 --rc genhtml_function_coverage=1 00:19:58.113 --rc genhtml_legend=1 00:19:58.113 --rc geninfo_all_blocks=1 00:19:58.113 --rc geninfo_unexecuted_blocks=1 00:19:58.113 00:19:58.113 ' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:58.113 13:49:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:58.113 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.113 13:49:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:58.113 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.114 13:49:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:58.114 Cannot find device "nvmf_init_br" 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:58.114 Cannot find device "nvmf_init_br2" 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:19:58.114 13:49:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:58.114 Cannot find device "nvmf_tgt_br" 00:19:58.114 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:19:58.114 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.114 Cannot find device "nvmf_tgt_br2" 00:19:58.114 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:19:58.114 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:58.373 Cannot find device "nvmf_init_br" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:58.373 Cannot find device "nvmf_init_br2" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:58.373 Cannot find device "nvmf_tgt_br" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:58.373 Cannot find device "nvmf_tgt_br2" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:58.373 Cannot find device "nvmf_br" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:58.373 Cannot find device "nvmf_init_if" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:58.373 Cannot find device "nvmf_init_if2" 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:19:58.373 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:58.374 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:58.634 13:49:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:58.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:19:58.634 00:19:58.634 --- 10.0.0.3 ping statistics --- 00:19:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.634 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:58.634 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:58.634 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:19:58.634 00:19:58.634 --- 10.0.0.4 ping statistics --- 00:19:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.634 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:58.634 00:19:58.634 --- 10.0.0.1 ping statistics --- 00:19:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.634 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:58.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:58.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:58.634 00:19:58.634 --- 10.0.0.2 ping statistics --- 00:19:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.634 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86622 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86622 00:19:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 86622 ']' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.634 13:49:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.893 [2024-11-06 13:49:39.586852] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
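nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified sketch of that start-and-wait pattern (the real waitforlisten in autotest_common.sh retries more defensively; rpc_get_methods is used here only as a cheap liveness probe):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the target responds or the process dies.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done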
00:19:58.893 [2024-11-06 13:49:39.586931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.893 [2024-11-06 13:49:39.738493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.893 [2024-11-06 13:49:39.796794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.893 [2024-11-06 13:49:39.797080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.893 [2024-11-06 13:49:39.797275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.893 [2024-11-06 13:49:39.797323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.893 [2024-11-06 13:49:39.797349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.893 [2024-11-06 13:49:39.798825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.893 [2024-11-06 13:49:39.799015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.893 [2024-11-06 13:49:39.799018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 [2024-11-06 13:49:40.527243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 Malloc0 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 [2024-11-06 13:49:40.606302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 [2024-11-06 13:49:40.618205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 Malloc1 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86674 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86674 /var/tmp/bdevperf.sock 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 86674 ']' 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
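What follows is the core multicontroller check: NVMe0 is attached once through the bdevperf RPC socket, then re-attached under the same name with a different hostnqn, which bdev_nvme must reject ("A controller named NVMe0 already exists with the specified network path", Code=-114). The same two calls expressed directly with rpc.py, every flag copied from the trace (illustrative; the harness issues them via rpc_cmd):

    rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # First attach succeeds and exposes bdev NVMe0n1.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Same controller name, different hostnqn: expected to fail with -114.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
        -q nqn.2021-09-7.io.spdk:00001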
00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.831 13:49:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.768 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.768 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:20:00.768 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:00.768 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.768 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.027 NVMe0n1 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:01.027 1 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.027 2024/11/06 13:49:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.027 request: 00:20:01.027 { 00:20:01.027 "method": "bdev_nvme_attach_controller", 00:20:01.027 "params": { 00:20:01.027 "name": "NVMe0", 00:20:01.027 "trtype": "tcp", 00:20:01.027 "traddr": "10.0.0.3", 00:20:01.027 "adrfam": "ipv4", 00:20:01.027 "trsvcid": "4420", 00:20:01.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.027 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:01.027 "hostaddr": "10.0.0.1", 00:20:01.027 "prchk_reftag": false, 00:20:01.027 "prchk_guard": false, 00:20:01.027 "hdgst": false, 00:20:01.027 "ddgst": false, 00:20:01.027 "allow_unrecognized_csi": false 00:20:01.027 } 00:20:01.027 } 00:20:01.027 Got JSON-RPC error response 00:20:01.027 GoRPCClient: error on JSON-RPC call 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:01.027 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.028 2024/11/06 13:49:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.028 request: 00:20:01.028 { 00:20:01.028 "method": "bdev_nvme_attach_controller", 00:20:01.028 "params": { 00:20:01.028 "name": "NVMe0", 00:20:01.028 "trtype": "tcp", 00:20:01.028 "traddr": "10.0.0.3", 00:20:01.028 "adrfam": "ipv4", 00:20:01.028 "trsvcid": "4420", 00:20:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.028 "hostaddr": "10.0.0.1", 00:20:01.028 "prchk_reftag": false, 00:20:01.028 "prchk_guard": false, 00:20:01.028 "hdgst": false, 00:20:01.028 "ddgst": false, 00:20:01.028 "allow_unrecognized_csi": false 00:20:01.028 } 00:20:01.028 } 00:20:01.028 Got JSON-RPC error response 00:20:01.028 GoRPCClient: error on JSON-RPC call 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.028 2024/11/06 13:49:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:01.028 request: 00:20:01.028 { 00:20:01.028 
"method": "bdev_nvme_attach_controller", 00:20:01.028 "params": { 00:20:01.028 "name": "NVMe0", 00:20:01.028 "trtype": "tcp", 00:20:01.028 "traddr": "10.0.0.3", 00:20:01.028 "adrfam": "ipv4", 00:20:01.028 "trsvcid": "4420", 00:20:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.028 "hostaddr": "10.0.0.1", 00:20:01.028 "prchk_reftag": false, 00:20:01.028 "prchk_guard": false, 00:20:01.028 "hdgst": false, 00:20:01.028 "ddgst": false, 00:20:01.028 "multipath": "disable", 00:20:01.028 "allow_unrecognized_csi": false 00:20:01.028 } 00:20:01.028 } 00:20:01.028 Got JSON-RPC error response 00:20:01.028 GoRPCClient: error on JSON-RPC call 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.028 2024/11/06 13:49:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:01.028 request: 00:20:01.028 { 00:20:01.028 "method": "bdev_nvme_attach_controller", 00:20:01.028 "params": { 00:20:01.028 "name": "NVMe0", 00:20:01.028 "trtype": "tcp", 00:20:01.028 "traddr": 
"10.0.0.3", 00:20:01.028 "adrfam": "ipv4", 00:20:01.028 "trsvcid": "4420", 00:20:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.028 "hostaddr": "10.0.0.1", 00:20:01.028 "prchk_reftag": false, 00:20:01.028 "prchk_guard": false, 00:20:01.028 "hdgst": false, 00:20:01.028 "ddgst": false, 00:20:01.028 "multipath": "failover", 00:20:01.028 "allow_unrecognized_csi": false 00:20:01.028 } 00:20:01.028 } 00:20:01.028 Got JSON-RPC error response 00:20:01.028 GoRPCClient: error on JSON-RPC call 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.028 NVMe0n1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.028 13:49:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.288 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.288 13:49:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:01.288 13:49:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.226 { 00:20:02.226 "results": [ 00:20:02.226 { 00:20:02.226 "job": "NVMe0n1", 00:20:02.226 "core_mask": "0x1", 00:20:02.226 "workload": "write", 00:20:02.226 "status": "finished", 00:20:02.226 "queue_depth": 128, 00:20:02.226 "io_size": 4096, 00:20:02.226 "runtime": 1.005361, 00:20:02.226 "iops": 25437.62887161925, 00:20:02.226 "mibps": 99.3657377797627, 00:20:02.226 "io_failed": 0, 00:20:02.226 "io_timeout": 0, 00:20:02.226 "avg_latency_us": 5021.1875472171, 00:20:02.226 "min_latency_us": 3000.4433734939757, 00:20:02.226 "max_latency_us": 10948.986345381527 00:20:02.226 } 00:20:02.226 ], 00:20:02.226 "core_count": 1 00:20:02.226 } 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 nvme1n1 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
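The commands around this point exercise the host-address selection path of the multicontroller test: attach a controller with an explicit initiator-side address (-i), then ask the target which peer address the resulting queue pairs actually came from. A condensed sketch of that round-trip, using SPDK's scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper (socket path, addresses, and NQNs are the ones from this run):

    # attach to cnode2, forcing 10.0.0.1 as the host (source) address
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1

    # the target reports the peer address of each queue pair; it should
    # match the -i value the controller was attached with
    rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
        | jq -r '.[].peer_address.traddr'        # expect 10.0.0.1

    # detach and repeat with the second initiator address (10.0.0.2)
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1

The earlier Code=-114 failures are deliberate: re-attaching under an existing controller name is rejected unless the new path is a legitimate multipath addition, which is exactly what the NOT wrapper asserts.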
00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 nvme1n1 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.485 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86674 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 86674 ']' 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 86674 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86674 00:20:02.744 killing process with pid 86674 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86674' 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 86674 00:20:02.744 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 86674 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:03.142 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:20:03.142 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:03.142 [2024-11-06 13:49:40.751578] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:03.142 [2024-11-06 13:49:40.751657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86674 ] 00:20:03.142 [2024-11-06 13:49:40.885387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.142 [2024-11-06 13:49:40.954607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.142 [2024-11-06 13:49:42.024940] bdev.c:4753:bdev_name_add: *ERROR*: Bdev name b0bec915-bef8-4ef6-9ea5-04cc05290652 already exists 00:20:03.142 [2024-11-06 13:49:42.024995] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:b0bec915-bef8-4ef6-9ea5-04cc05290652 alias for bdev NVMe1n1 00:20:03.142 [2024-11-06 13:49:42.025012] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:03.142 Running I/O for 1 seconds... 
00:20:03.142 25398.00 IOPS, 99.21 MiB/s 00:20:03.142 Latency(us) 00:20:03.142 [2024-11-06T13:49:44.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.142 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:03.142 NVMe0n1 : 1.01 25437.63 99.37 0.00 0.00 5021.19 3000.44 10948.99 00:20:03.142 [2024-11-06T13:49:44.075Z] =================================================================================================================== 00:20:03.142 [2024-11-06T13:49:44.075Z] Total : 25437.63 99.37 0.00 0.00 5021.19 3000.44 10948.99 00:20:03.142 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.142 00:20:03.142 Latency(us) 00:20:03.142 [2024-11-06T13:49:44.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.142 [2024-11-06T13:49:44.076Z] =================================================================================================================== 00:20:03.143 [2024-11-06T13:49:44.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.143 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.143 rmmod nvme_tcp 00:20:03.143 rmmod nvme_fabrics 00:20:03.143 rmmod nvme_keyring 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86622 ']' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86622 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 86622 ']' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 86622 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86622 00:20:03.143 killing process with pid 86622 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:03.143 13:49:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86622' 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 86622 00:20:03.143 13:49:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 86622 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.403 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:03.662 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.663 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
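At this point nvmf_tcp_fini has undone the virtual test network: the SPDK-tagged iptables rules are filtered back out, every veth is unplugged from the bridge, and the interfaces and target namespace are deleted. Condensed into plain commands, the teardown visible above amounts to the following sketch (the final namespace removal is an assumption about what _remove_spdk_ns does, since its body is redirected away in the log):

    # restore iptables minus the rules the suite tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # unplug the four bridge-side veth ends, then drop everything
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: _remove_spdk_ns drops the namespace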
00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:20:03.922 00:20:03.922 real 0m5.950s 00:20:03.922 user 0m16.879s 00:20:03.922 sys 0m1.655s 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:03.922 ************************************ 00:20:03.922 END TEST nvmf_multicontroller 00:20:03.922 ************************************ 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.922 ************************************ 00:20:03.922 START TEST nvmf_aer 00:20:03.922 ************************************ 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:03.922 * Looking for test storage... 00:20:03.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:20:03.922 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:04.182 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.183 --rc genhtml_branch_coverage=1 00:20:04.183 --rc genhtml_function_coverage=1 00:20:04.183 --rc genhtml_legend=1 00:20:04.183 --rc geninfo_all_blocks=1 00:20:04.183 --rc geninfo_unexecuted_blocks=1 00:20:04.183 00:20:04.183 ' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.183 --rc genhtml_branch_coverage=1 00:20:04.183 --rc genhtml_function_coverage=1 00:20:04.183 --rc genhtml_legend=1 00:20:04.183 --rc geninfo_all_blocks=1 00:20:04.183 --rc geninfo_unexecuted_blocks=1 00:20:04.183 00:20:04.183 ' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.183 --rc genhtml_branch_coverage=1 00:20:04.183 --rc genhtml_function_coverage=1 00:20:04.183 --rc genhtml_legend=1 00:20:04.183 --rc geninfo_all_blocks=1 00:20:04.183 --rc geninfo_unexecuted_blocks=1 00:20:04.183 00:20:04.183 ' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:04.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.183 --rc genhtml_branch_coverage=1 00:20:04.183 --rc genhtml_function_coverage=1 00:20:04.183 --rc genhtml_legend=1 00:20:04.183 --rc geninfo_all_blocks=1 00:20:04.183 --rc geninfo_unexecuted_blocks=1 00:20:04.183 00:20:04.183 ' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.183 
13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
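With NET_TYPE=virt, nvmftestinit now hands off to nvmf_veth_init, which builds an all-virtual topology instead of touching real NICs: initiator interfaces 10.0.0.1/10.0.0.2 stay in the root namespace, target interfaces 10.0.0.3/10.0.0.4 move into the nvmf_tgt_ns_spdk namespace, and the peer ends of all four veth pairs meet on one bridge. A sketch of that layout for one initiator/target pair (the commands that follow in the log do this for both pairs, plus 'ip link set ... up' on every interface):

    ip netns add nvmf_tgt_ns_spdk
    # each interface is one end of a veth pair; the *_br peers plug into the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" lines that follow are expected: the teardown half of the helper runs first against interfaces that do not exist yet, and each failing command is paired with a "true" guard.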
00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.183 13:49:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:04.183 Cannot find device "nvmf_init_br" 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:04.183 Cannot find device "nvmf_init_br2" 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:04.183 Cannot find device "nvmf_tgt_br" 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.183 Cannot find device "nvmf_tgt_br2" 00:20:04.183 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:20:04.184 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:04.184 Cannot find device "nvmf_init_br" 00:20:04.184 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:20:04.184 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:04.184 Cannot find device "nvmf_init_br2" 00:20:04.184 13:49:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:20:04.184 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:04.443 Cannot find device "nvmf_tgt_br" 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:04.443 Cannot find device "nvmf_tgt_br2" 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:04.443 Cannot find device "nvmf_br" 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:04.443 Cannot find device "nvmf_init_if" 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:04.443 Cannot find device "nvmf_init_if2" 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.443 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:20:04.703 00:20:04.703 --- 10.0.0.3 ping statistics --- 00:20:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.703 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:20:04.703 00:20:04.703 --- 10.0.0.4 ping statistics --- 00:20:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.703 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:20:04.703 00:20:04.703 --- 10.0.0.1 ping statistics --- 00:20:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.703 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:20:04.703 00:20:04.703 --- 10.0.0.2 ping statistics --- 00:20:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.703 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=87001 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 87001 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 87001 ']' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:04.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.703 13:49:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.703 [2024-11-06 13:49:45.547005] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:20:04.703 [2024-11-06 13:49:45.547068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.963 [2024-11-06 13:49:45.702173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.963 [2024-11-06 13:49:45.760142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.963 [2024-11-06 13:49:45.760203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.963 [2024-11-06 13:49:45.760223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.963 [2024-11-06 13:49:45.760231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.963 [2024-11-06 13:49:45.760253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.963 [2024-11-06 13:49:45.761616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.963 [2024-11-06 13:49:45.761809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.963 [2024-11-06 13:49:45.761927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.963 [2024-11-06 13:49:45.761932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.531 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.531 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:20:05.531 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.531 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.531 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 [2024-11-06 13:49:46.483015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 Malloc0 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 [2024-11-06 13:49:46.566793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.790 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.790 [ 00:20:05.790 { 00:20:05.790 "allow_any_host": true, 00:20:05.790 "hosts": [], 00:20:05.790 "listen_addresses": [], 00:20:05.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.790 "subtype": "Discovery" 00:20:05.790 }, 00:20:05.790 { 00:20:05.790 "allow_any_host": true, 00:20:05.790 "hosts": [], 00:20:05.790 "listen_addresses": [ 00:20:05.790 { 00:20:05.790 "adrfam": "IPv4", 00:20:05.790 "traddr": "10.0.0.3", 00:20:05.790 "trsvcid": "4420", 00:20:05.790 "trtype": "TCP" 00:20:05.790 } 00:20:05.790 ], 00:20:05.790 "max_cntlid": 65519, 00:20:05.790 "max_namespaces": 2, 00:20:05.791 "min_cntlid": 1, 00:20:05.791 "model_number": "SPDK bdev Controller", 00:20:05.791 "namespaces": [ 00:20:05.791 { 00:20:05.791 "bdev_name": "Malloc0", 00:20:05.791 "name": "Malloc0", 00:20:05.791 "nguid": "6D6BAA2CC3114FB58F0F9134168A4118", 00:20:05.791 "nsid": 1, 00:20:05.791 "uuid": "6d6baa2c-c311-4fb5-8f0f-9134168a4118" 00:20:05.791 } 00:20:05.791 ], 00:20:05.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.791 "serial_number": "SPDK00000000000001", 00:20:05.791 "subtype": "NVMe" 00:20:05.791 } 00:20:05.791 ] 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=87057 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:20:05.791 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.050 Malloc1 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.050 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.050 Asynchronous Event Request test 00:20:06.050 Attaching to 10.0.0.3 00:20:06.050 Attached to 10.0.0.3 00:20:06.050 Registering asynchronous event callbacks... 00:20:06.050 Starting namespace attribute notice tests for all controllers... 00:20:06.050 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:06.050 aer_cb - Changed Namespace 00:20:06.050 Cleaning up... 
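The aer binary is told via -t /tmp/aer_touch_file to create the touch file once its AER callback is registered, and waitforfile polls for it so the RPC side only proceeds when the host is actually listening for events. A close sketch of the helper traced above (0.1 s polls, 200 tries, so roughly a 20 s timeout):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # give up after ~20 s
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file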
00:20:06.050 [ 00:20:06.050 { 00:20:06.050 "allow_any_host": true, 00:20:06.050 "hosts": [], 00:20:06.050 "listen_addresses": [], 00:20:06.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:06.050 "subtype": "Discovery" 00:20:06.050 }, 00:20:06.050 { 00:20:06.050 "allow_any_host": true, 00:20:06.050 "hosts": [], 00:20:06.050 "listen_addresses": [ 00:20:06.050 { 00:20:06.051 "adrfam": "IPv4", 00:20:06.051 "traddr": "10.0.0.3", 00:20:06.051 "trsvcid": "4420", 00:20:06.051 "trtype": "TCP" 00:20:06.051 } 00:20:06.051 ], 00:20:06.051 "max_cntlid": 65519, 00:20:06.051 "max_namespaces": 2, 00:20:06.051 "min_cntlid": 1, 00:20:06.051 "model_number": "SPDK bdev Controller", 00:20:06.051 "namespaces": [ 00:20:06.051 { 00:20:06.051 "bdev_name": "Malloc0", 00:20:06.051 "name": "Malloc0", 00:20:06.051 "nguid": "6D6BAA2CC3114FB58F0F9134168A4118", 00:20:06.051 "nsid": 1, 00:20:06.051 "uuid": "6d6baa2c-c311-4fb5-8f0f-9134168a4118" 00:20:06.051 }, 00:20:06.051 { 00:20:06.051 "bdev_name": "Malloc1", 00:20:06.051 "name": "Malloc1", 00:20:06.051 "nguid": "4DA99CEA1D454E68A94A10CA030A20A2", 00:20:06.051 "nsid": 2, 00:20:06.051 "uuid": "4da99cea-1d45-4e68-a94a-10ca030a20a2" 00:20:06.051 } 00:20:06.051 ], 00:20:06.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.051 "serial_number": "SPDK00000000000001", 00:20:06.051 "subtype": "NVMe" 00:20:06.051 } 00:20:06.051 ] 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 87057 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.051 13:49:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:06.310 rmmod nvme_tcp 
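nvmfcleanup unloads the kernel initiator stack under set +e and retries, since nvme-tcp can stay busy for a moment after the last disconnect; removing it also pulls out its dependents, which is why rmmod nvme_fabrics and rmmod nvme_keyring appear just below. A sketch of that loop (the pause between attempts is an assumption; the traced run succeeds on the first pass):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed pause between attempts
    done
    set -e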
00:20:06.310 rmmod nvme_fabrics 00:20:06.310 rmmod nvme_keyring 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 87001 ']' 00:20:06.310 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 87001 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 87001 ']' 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 87001 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87001 00:20:06.311 killing process with pid 87001 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87001' 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 87001 00:20:06.311 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 87001 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:06.570 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:06.830 13:49:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:20:06.830 00:20:06.830 real 0m3.052s 00:20:06.830 user 0m6.959s 00:20:06.830 sys 0m1.038s 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:06.830 ************************************ 00:20:06.830 13:49:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:06.830 END TEST nvmf_aer 00:20:06.830 ************************************ 00:20:07.089 13:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:07.089 13:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:07.089 13:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.089 13:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.090 ************************************ 00:20:07.090 START TEST nvmf_async_init 00:20:07.090 ************************************ 00:20:07.090 13:49:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:07.090 * Looking for test storage... 
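run_test, seen above launching async_init.sh, is the harness wrapper that prints the START/END banners and times the script, which is where the real/user/sys summary at the end of nvmf_aer came from. A heavily simplified sketch (the real helper also manages xtrace and argument checks, so treat this as a rough model only):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        return $rc
    }
    run_test nvmf_async_init \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp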
00:20:07.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.090 13:49:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:07.090 13:49:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:20:07.090 13:49:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.349 --rc genhtml_branch_coverage=1 00:20:07.349 --rc genhtml_function_coverage=1 00:20:07.349 --rc genhtml_legend=1 00:20:07.349 --rc geninfo_all_blocks=1 00:20:07.349 --rc geninfo_unexecuted_blocks=1 00:20:07.349 00:20:07.349 ' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.349 --rc genhtml_branch_coverage=1 00:20:07.349 --rc genhtml_function_coverage=1 00:20:07.349 --rc genhtml_legend=1 00:20:07.349 --rc geninfo_all_blocks=1 00:20:07.349 --rc geninfo_unexecuted_blocks=1 00:20:07.349 00:20:07.349 ' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.349 --rc genhtml_branch_coverage=1 00:20:07.349 --rc genhtml_function_coverage=1 00:20:07.349 --rc genhtml_legend=1 00:20:07.349 --rc geninfo_all_blocks=1 00:20:07.349 --rc geninfo_unexecuted_blocks=1 00:20:07.349 00:20:07.349 ' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:07.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.349 --rc genhtml_branch_coverage=1 00:20:07.349 --rc genhtml_function_coverage=1 00:20:07.349 --rc genhtml_legend=1 00:20:07.349 --rc geninfo_all_blocks=1 00:20:07.349 --rc geninfo_unexecuted_blocks=1 00:20:07.349 00:20:07.349 ' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.349 13:49:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.349 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:07.350 13:49:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d96d16b461064e1493d1236a1fc0b93d 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
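The variables above pin down a fixed topology that nvmf_veth_init then builds with the commands traced below: veth pairs whose _if ends carry the addresses (the target ends moved into nvmf_tgt_ns_spdk) and whose _br ends are enslaved to the nvmf_br bridge. Condensed here to the first initiator/target pair (the trace also creates nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4 the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up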
00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:07.350 Cannot find device "nvmf_init_br" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:07.350 Cannot find device "nvmf_init_br2" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:07.350 Cannot find device "nvmf_tgt_br" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.350 Cannot find device "nvmf_tgt_br2" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:07.350 Cannot find device "nvmf_init_br" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:07.350 Cannot find device "nvmf_init_br2" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:07.350 Cannot find device "nvmf_tgt_br" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:07.350 Cannot find device "nvmf_tgt_br2" 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:20:07.350 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:07.609 Cannot find device "nvmf_br" 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:07.609 Cannot find device "nvmf_init_if" 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:07.609 Cannot find device "nvmf_init_if2" 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:20:07.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:07.609 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:07.868 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.869 13:49:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:07.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:20:07.869 00:20:07.869 --- 10.0.0.3 ping statistics --- 00:20:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.869 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:07.869 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:07.869 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:20:07.869 00:20:07.869 --- 10.0.0.4 ping statistics --- 00:20:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.869 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:07.869 00:20:07.869 --- 10.0.0.1 ping statistics --- 00:20:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.869 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:07.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:07.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:07.869 00:20:07.869 --- 10.0.0.2 ping statistics --- 00:20:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.869 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87286 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87286 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 87286 ']' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.869 13:49:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:07.869 [2024-11-06 13:49:48.793490] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:07.869 [2024-11-06 13:49:48.793559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.128 [2024-11-06 13:49:48.945688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.128 [2024-11-06 13:49:49.000288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
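With the single-core target up, async_init provisions it entirely through rpc_cmd, a thin wrapper that forwards to scripts/rpc.py against /var/tmp/spdk.sock. The equivalent direct calls, matching the RPCs traced below (the wrapper equivalence is the only assumption; every command and argument appears verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" bdev_null_create null0 1024 512
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d96d16b461064e1493d1236a1fc0b93d
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # connect back to the listener as an SPDK NVMe bdev (nvme0n1 shows up below)
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0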
00:20:08.128 [2024-11-06 13:49:49.000346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.129 [2024-11-06 13:49:49.000355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.129 [2024-11-06 13:49:49.000363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.129 [2024-11-06 13:49:49.000370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.129 [2024-11-06 13:49:49.000737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 [2024-11-06 13:49:49.734033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 null0 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d96d16b461064e1493d1236a1fc0b93d 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 [2024-11-06 13:49:49.774132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:49:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 nvme0n1 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 [ 00:20:09.325 { 00:20:09.325 "aliases": [ 00:20:09.325 "d96d16b4-6106-4e14-93d1-236a1fc0b93d" 00:20:09.325 ], 00:20:09.325 "assigned_rate_limits": { 00:20:09.325 "r_mbytes_per_sec": 0, 00:20:09.325 "rw_ios_per_sec": 0, 00:20:09.325 "rw_mbytes_per_sec": 0, 00:20:09.325 "w_mbytes_per_sec": 0 00:20:09.325 }, 00:20:09.325 "block_size": 512, 00:20:09.325 "claimed": false, 00:20:09.325 "driver_specific": { 00:20:09.325 "mp_policy": "active_passive", 00:20:09.325 "nvme": [ 00:20:09.325 { 00:20:09.325 "ctrlr_data": { 00:20:09.325 "ana_reporting": false, 00:20:09.325 "cntlid": 1, 00:20:09.325 "firmware_revision": "25.01", 00:20:09.325 "model_number": "SPDK bdev Controller", 00:20:09.325 "multi_ctrlr": true, 00:20:09.325 "oacs": { 00:20:09.325 "firmware": 0, 00:20:09.325 "format": 0, 00:20:09.325 "ns_manage": 0, 00:20:09.325 "security": 0 00:20:09.325 }, 00:20:09.325 "serial_number": "00000000000000000000", 00:20:09.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.325 "vendor_id": "0x8086" 00:20:09.325 }, 00:20:09.325 "ns_data": { 00:20:09.325 "can_share": true, 00:20:09.325 "id": 1 00:20:09.325 }, 00:20:09.325 "trid": { 00:20:09.325 "adrfam": "IPv4", 00:20:09.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.325 "traddr": "10.0.0.3", 00:20:09.325 "trsvcid": "4420", 00:20:09.325 "trtype": "TCP" 00:20:09.325 }, 00:20:09.325 "vs": { 00:20:09.325 "nvme_version": "1.3" 00:20:09.325 } 00:20:09.325 } 00:20:09.325 ] 00:20:09.325 }, 00:20:09.325 "memory_domains": [ 00:20:09.325 { 00:20:09.325 "dma_device_id": "system", 00:20:09.325 "dma_device_type": 1 00:20:09.325 } 00:20:09.325 ], 00:20:09.325 "name": "nvme0n1", 00:20:09.325 "num_blocks": 2097152, 00:20:09.325 "numa_id": -1, 00:20:09.325 "product_name": "NVMe disk", 00:20:09.325 "supported_io_types": { 00:20:09.325 "abort": true, 
00:20:09.325 "compare": true, 00:20:09.325 "compare_and_write": true, 00:20:09.325 "copy": true, 00:20:09.325 "flush": true, 00:20:09.325 "get_zone_info": false, 00:20:09.325 "nvme_admin": true, 00:20:09.325 "nvme_io": true, 00:20:09.325 "nvme_io_md": false, 00:20:09.325 "nvme_iov_md": false, 00:20:09.325 "read": true, 00:20:09.325 "reset": true, 00:20:09.325 "seek_data": false, 00:20:09.325 "seek_hole": false, 00:20:09.325 "unmap": false, 00:20:09.325 "write": true, 00:20:09.325 "write_zeroes": true, 00:20:09.325 "zcopy": false, 00:20:09.325 "zone_append": false, 00:20:09.325 "zone_management": false 00:20:09.325 }, 00:20:09.325 "uuid": "d96d16b4-6106-4e14-93d1-236a1fc0b93d", 00:20:09.325 "zoned": false 00:20:09.325 } 00:20:09.325 ] 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 [2024-11-06 13:49:50.050180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:09.325 [2024-11-06 13:49:50.050269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b84f0 (9): Bad file descriptor 00:20:09.325 [2024-11-06 13:49:50.181977] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.325 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.325 [ 00:20:09.325 { 00:20:09.325 "aliases": [ 00:20:09.325 "d96d16b4-6106-4e14-93d1-236a1fc0b93d" 00:20:09.325 ], 00:20:09.325 "assigned_rate_limits": { 00:20:09.325 "r_mbytes_per_sec": 0, 00:20:09.325 "rw_ios_per_sec": 0, 00:20:09.325 "rw_mbytes_per_sec": 0, 00:20:09.325 "w_mbytes_per_sec": 0 00:20:09.325 }, 00:20:09.325 "block_size": 512, 00:20:09.325 "claimed": false, 00:20:09.325 "driver_specific": { 00:20:09.325 "mp_policy": "active_passive", 00:20:09.325 "nvme": [ 00:20:09.325 { 00:20:09.325 "ctrlr_data": { 00:20:09.325 "ana_reporting": false, 00:20:09.325 "cntlid": 2, 00:20:09.325 "firmware_revision": "25.01", 00:20:09.325 "model_number": "SPDK bdev Controller", 00:20:09.325 "multi_ctrlr": true, 00:20:09.325 "oacs": { 00:20:09.325 "firmware": 0, 00:20:09.325 "format": 0, 00:20:09.325 "ns_manage": 0, 00:20:09.325 "security": 0 00:20:09.325 }, 00:20:09.325 "serial_number": "00000000000000000000", 00:20:09.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.325 "vendor_id": "0x8086" 00:20:09.325 }, 00:20:09.325 "ns_data": { 00:20:09.325 "can_share": true, 00:20:09.325 "id": 1 00:20:09.325 }, 00:20:09.325 "trid": { 00:20:09.325 "adrfam": "IPv4", 00:20:09.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.325 "traddr": "10.0.0.3", 00:20:09.325 "trsvcid": "4420", 00:20:09.325 "trtype": "TCP" 00:20:09.325 }, 00:20:09.326 "vs": { 00:20:09.326 "nvme_version": "1.3" 00:20:09.326 } 00:20:09.326 } 00:20:09.326 ] 
00:20:09.326 }, 00:20:09.326 "memory_domains": [ 00:20:09.326 { 00:20:09.326 "dma_device_id": "system", 00:20:09.326 "dma_device_type": 1 00:20:09.326 } 00:20:09.326 ], 00:20:09.326 "name": "nvme0n1", 00:20:09.326 "num_blocks": 2097152, 00:20:09.326 "numa_id": -1, 00:20:09.326 "product_name": "NVMe disk", 00:20:09.326 "supported_io_types": { 00:20:09.326 "abort": true, 00:20:09.326 "compare": true, 00:20:09.326 "compare_and_write": true, 00:20:09.326 "copy": true, 00:20:09.326 "flush": true, 00:20:09.326 "get_zone_info": false, 00:20:09.326 "nvme_admin": true, 00:20:09.326 "nvme_io": true, 00:20:09.326 "nvme_io_md": false, 00:20:09.326 "nvme_iov_md": false, 00:20:09.326 "read": true, 00:20:09.326 "reset": true, 00:20:09.326 "seek_data": false, 00:20:09.326 "seek_hole": false, 00:20:09.326 "unmap": false, 00:20:09.326 "write": true, 00:20:09.326 "write_zeroes": true, 00:20:09.326 "zcopy": false, 00:20:09.326 "zone_append": false, 00:20:09.326 "zone_management": false 00:20:09.326 }, 00:20:09.326 "uuid": "d96d16b4-6106-4e14-93d1-236a1fc0b93d", 00:20:09.326 "zoned": false 00:20:09.326 } 00:20:09.326 ] 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rbtsdjdDd9 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rbtsdjdDd9 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rbtsdjdDd9 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.326 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 [2024-11-06 13:49:50.273964] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.585 [2024-11-06 13:49:50.274083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.585 [2024-11-06 13:49:50.298013] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.585 nvme0n1 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:09.585 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.586 [ 00:20:09.586 { 00:20:09.586 "aliases": [ 00:20:09.586 "d96d16b4-6106-4e14-93d1-236a1fc0b93d" 00:20:09.586 ], 00:20:09.586 "assigned_rate_limits": { 00:20:09.586 "r_mbytes_per_sec": 0, 00:20:09.586 "rw_ios_per_sec": 0, 00:20:09.586 "rw_mbytes_per_sec": 0, 00:20:09.586 "w_mbytes_per_sec": 0 00:20:09.586 }, 00:20:09.586 "block_size": 512, 00:20:09.586 "claimed": false, 00:20:09.586 "driver_specific": { 00:20:09.586 "mp_policy": "active_passive", 00:20:09.586 "nvme": [ 00:20:09.586 { 00:20:09.586 "ctrlr_data": { 00:20:09.586 "ana_reporting": false, 00:20:09.586 "cntlid": 3, 00:20:09.586 "firmware_revision": "25.01", 00:20:09.586 "model_number": "SPDK bdev Controller", 00:20:09.586 "multi_ctrlr": true, 00:20:09.586 "oacs": { 00:20:09.586 "firmware": 0, 00:20:09.586 "format": 0, 00:20:09.586 "ns_manage": 0, 00:20:09.586 "security": 0 00:20:09.586 }, 00:20:09.586 "serial_number": "00000000000000000000", 00:20:09.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.586 "vendor_id": "0x8086" 00:20:09.586 }, 00:20:09.586 "ns_data": { 00:20:09.586 "can_share": true, 00:20:09.586 "id": 1 00:20:09.586 }, 00:20:09.586 "trid": { 00:20:09.586 "adrfam": "IPv4", 00:20:09.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.586 "traddr": "10.0.0.3", 00:20:09.586 "trsvcid": "4421", 00:20:09.586 "trtype": "TCP" 00:20:09.586 }, 00:20:09.586 "vs": { 00:20:09.586 "nvme_version": "1.3" 00:20:09.586 } 00:20:09.586 } 00:20:09.586 ] 00:20:09.586 }, 00:20:09.586 "memory_domains": [ 00:20:09.586 { 00:20:09.586 "dma_device_id": "system", 00:20:09.586 "dma_device_type": 1 00:20:09.586 } 00:20:09.586 ], 00:20:09.586 "name": "nvme0n1", 00:20:09.586 "num_blocks": 
2097152, 00:20:09.586 "numa_id": -1, 00:20:09.586 "product_name": "NVMe disk", 00:20:09.586 "supported_io_types": { 00:20:09.586 "abort": true, 00:20:09.586 "compare": true, 00:20:09.586 "compare_and_write": true, 00:20:09.586 "copy": true, 00:20:09.586 "flush": true, 00:20:09.586 "get_zone_info": false, 00:20:09.586 "nvme_admin": true, 00:20:09.586 "nvme_io": true, 00:20:09.586 "nvme_io_md": false, 00:20:09.586 "nvme_iov_md": false, 00:20:09.586 "read": true, 00:20:09.586 "reset": true, 00:20:09.586 "seek_data": false, 00:20:09.586 "seek_hole": false, 00:20:09.586 "unmap": false, 00:20:09.586 "write": true, 00:20:09.586 "write_zeroes": true, 00:20:09.586 "zcopy": false, 00:20:09.586 "zone_append": false, 00:20:09.586 "zone_management": false 00:20:09.586 }, 00:20:09.586 "uuid": "d96d16b4-6106-4e14-93d1-236a1fc0b93d", 00:20:09.586 "zoned": false 00:20:09.586 } 00:20:09.586 ] 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rbtsdjdDd9 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.586 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.586 rmmod nvme_tcp 00:20:09.845 rmmod nvme_fabrics 00:20:09.845 rmmod nvme_keyring 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87286 ']' 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87286 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 87286 ']' 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 87286 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87286 00:20:09.845 13:49:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:09.845 killing process with pid 87286 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87286' 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 87286 00:20:09.845 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 87286 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:10.105 13:49:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:10.105 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:10.105 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:20:10.365 00:20:10.365 real 0m3.314s 00:20:10.365 user 0m2.499s 00:20:10.365 sys 0m1.081s 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.365 ************************************ 00:20:10.365 END TEST nvmf_async_init 00:20:10.365 ************************************ 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.365 ************************************ 00:20:10.365 START TEST dma 00:20:10.365 ************************************ 00:20:10.365 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:10.626 * Looking for test storage... 00:20:10.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.626 --rc genhtml_branch_coverage=1 00:20:10.626 --rc genhtml_function_coverage=1 00:20:10.626 --rc genhtml_legend=1 00:20:10.626 --rc geninfo_all_blocks=1 00:20:10.626 --rc geninfo_unexecuted_blocks=1 00:20:10.626 00:20:10.626 ' 00:20:10.626 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.626 --rc genhtml_branch_coverage=1 00:20:10.626 --rc genhtml_function_coverage=1 00:20:10.626 --rc genhtml_legend=1 00:20:10.626 --rc geninfo_all_blocks=1 00:20:10.626 --rc geninfo_unexecuted_blocks=1 00:20:10.626 00:20:10.626 ' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.627 --rc genhtml_branch_coverage=1 00:20:10.627 --rc genhtml_function_coverage=1 00:20:10.627 --rc genhtml_legend=1 00:20:10.627 --rc geninfo_all_blocks=1 00:20:10.627 --rc geninfo_unexecuted_blocks=1 00:20:10.627 00:20:10.627 ' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.627 --rc genhtml_branch_coverage=1 00:20:10.627 --rc genhtml_function_coverage=1 00:20:10.627 --rc genhtml_legend=1 00:20:10.627 --rc geninfo_all_blocks=1 00:20:10.627 --rc geninfo_unexecuted_blocks=1 00:20:10.627 00:20:10.627 ' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.627 13:49:51 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:10.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:10.627 00:20:10.627 real 0m0.264s 00:20:10.627 user 0m0.140s 00:20:10.627 sys 0m0.142s 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.627 ************************************ 00:20:10.627 END TEST dma 00:20:10.627 ************************************ 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:10.627 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.627 ************************************ 00:20:10.627 START TEST nvmf_identify 00:20:10.627 ************************************ 00:20:10.627 13:49:51 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:10.888 * Looking for test storage... 00:20:10.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.888 --rc genhtml_branch_coverage=1 00:20:10.888 --rc genhtml_function_coverage=1 00:20:10.888 --rc genhtml_legend=1 00:20:10.888 --rc geninfo_all_blocks=1 00:20:10.888 --rc geninfo_unexecuted_blocks=1 00:20:10.888 00:20:10.888 ' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.888 --rc genhtml_branch_coverage=1 00:20:10.888 --rc genhtml_function_coverage=1 00:20:10.888 --rc genhtml_legend=1 00:20:10.888 --rc geninfo_all_blocks=1 00:20:10.888 --rc geninfo_unexecuted_blocks=1 00:20:10.888 00:20:10.888 ' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.888 --rc genhtml_branch_coverage=1 00:20:10.888 --rc genhtml_function_coverage=1 00:20:10.888 --rc genhtml_legend=1 00:20:10.888 --rc geninfo_all_blocks=1 00:20:10.888 --rc geninfo_unexecuted_blocks=1 00:20:10.888 00:20:10.888 ' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:10.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.888 --rc genhtml_branch_coverage=1 00:20:10.888 --rc genhtml_function_coverage=1 00:20:10.888 --rc genhtml_legend=1 00:20:10.888 --rc geninfo_all_blocks=1 00:20:10.888 --rc geninfo_unexecuted_blocks=1 00:20:10.888 00:20:10.888 ' 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.888 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.889 
13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.889 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.149 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.149 13:49:51 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:11.149 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:11.150 Cannot find device "nvmf_init_br" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:11.150 Cannot find device "nvmf_init_br2" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:11.150 Cannot find device "nvmf_tgt_br" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:11.150 Cannot find device "nvmf_tgt_br2" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:11.150 Cannot find device "nvmf_init_br" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:11.150 Cannot find device "nvmf_init_br2" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:11.150 Cannot find device "nvmf_tgt_br" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:11.150 Cannot find device "nvmf_tgt_br2" 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:11.150 13:49:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:11.150 Cannot find device "nvmf_br" 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:11.150 Cannot find device "nvmf_init_if" 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:11.150 Cannot find device "nvmf_init_if2" 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.150 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.410 
13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.410 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:11.670 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:11.670 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:20:11.670 00:20:11.670 --- 10.0.0.3 ping statistics --- 00:20:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.670 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:11.670 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:11.670 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:20:11.670 00:20:11.670 --- 10.0.0.4 ping statistics --- 00:20:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.670 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:11.670 00:20:11.670 --- 10.0.0.1 ping statistics --- 00:20:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.670 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:11.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:20:11.670 00:20:11.670 --- 10.0.0.2 ping statistics --- 00:20:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.670 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.670 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87622 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87622 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 87622 ']' 00:20:11.671 
13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.671 13:49:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:11.671 [2024-11-06 13:49:52.496451] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:11.671 [2024-11-06 13:49:52.496517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.931 [2024-11-06 13:49:52.650305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.931 [2024-11-06 13:49:52.711803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.931 [2024-11-06 13:49:52.711871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.931 [2024-11-06 13:49:52.711882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.931 [2024-11-06 13:49:52.711906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.931 [2024-11-06 13:49:52.711914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
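For reference, the target bring-up traced above reduces to the following sketch in plain shell (the paths and flags are taken from this log; the rpc_get_methods poll is a stand-in assumption for the harness's waitforlisten helper, not the test's actual code):

    SPDK_ROOT=/home/vagrant/spdk_repo/spdk              # checkout path as seen in this log
    # Launch the NVMe-oF target inside the test namespace: shared-memory id 0 (-i 0),
    # all tracepoint groups enabled (-e 0xFFFF), reactors pinned to cores 0-3 (-m 0xF).
    sudo ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the app answers on its default RPC socket (/var/tmp/spdk.sock)
    # before issuing any configuration RPCs.
    until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done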
00:20:11.931 [2024-11-06 13:49:52.713239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.931 [2024-11-06 13:49:52.713435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.931 [2024-11-06 13:49:52.713528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.931 [2024-11-06 13:49:52.713531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.500 [2024-11-06 13:49:53.397822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.500 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 Malloc0 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 [2024-11-06 13:49:53.539228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 [ 00:20:12.760 { 00:20:12.760 "allow_any_host": true, 00:20:12.760 "hosts": [], 00:20:12.760 "listen_addresses": [ 00:20:12.760 { 00:20:12.760 "adrfam": "IPv4", 00:20:12.760 "traddr": "10.0.0.3", 00:20:12.760 "trsvcid": "4420", 00:20:12.760 "trtype": "TCP" 00:20:12.760 } 00:20:12.760 ], 00:20:12.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:12.760 "subtype": "Discovery" 00:20:12.760 }, 00:20:12.760 { 00:20:12.760 "allow_any_host": true, 00:20:12.760 "hosts": [], 00:20:12.760 "listen_addresses": [ 00:20:12.760 { 00:20:12.760 "adrfam": "IPv4", 00:20:12.760 "traddr": "10.0.0.3", 00:20:12.760 "trsvcid": "4420", 00:20:12.760 "trtype": "TCP" 00:20:12.760 } 00:20:12.760 ], 00:20:12.760 "max_cntlid": 65519, 00:20:12.760 "max_namespaces": 32, 00:20:12.760 "min_cntlid": 1, 00:20:12.760 "model_number": "SPDK bdev Controller", 00:20:12.760 "namespaces": [ 00:20:12.760 { 00:20:12.760 "bdev_name": "Malloc0", 00:20:12.760 "eui64": "ABCDEF0123456789", 00:20:12.760 "name": "Malloc0", 00:20:12.760 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:12.760 "nsid": 1, 00:20:12.760 "uuid": "a8ab0580-55d0-4a50-bc49-6ac25281caf8" 00:20:12.760 } 00:20:12.760 ], 00:20:12.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.760 "serial_number": "SPDK00000000000001", 00:20:12.760 "subtype": "NVMe" 00:20:12.760 } 00:20:12.760 ] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:12.760 [2024-11-06 13:49:53.616533] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
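The subsystem listing above is produced by the provisioning RPCs traced just before it. Driven by hand through scripts/rpc.py instead of the harness's rpc_cmd wrapper, the same sequence would look roughly like this (a sketch under that assumption, with the command arguments copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the options traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems                        # returns the JSON dumped above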
00:20:12.760 [2024-11-06 13:49:53.616583] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87676 ] 00:20:13.024 [2024-11-06 13:49:53.765675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:13.024 [2024-11-06 13:49:53.765726] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:13.024 [2024-11-06 13:49:53.765732] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:13.024 [2024-11-06 13:49:53.765745] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:13.024 [2024-11-06 13:49:53.765755] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:13.024 [2024-11-06 13:49:53.770031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:13.024 [2024-11-06 13:49:53.770099] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2104d90 0 00:20:13.024 [2024-11-06 13:49:53.777894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:13.024 [2024-11-06 13:49:53.777914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:13.024 [2024-11-06 13:49:53.777919] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:13.024 [2024-11-06 13:49:53.777923] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:13.024 [2024-11-06 13:49:53.777954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.777960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.777964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.024 [2024-11-06 13:49:53.777976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:13.024 [2024-11-06 13:49:53.778002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.024 [2024-11-06 13:49:53.785890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.024 [2024-11-06 13:49:53.785905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.024 [2024-11-06 13:49:53.785910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.785914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.024 [2024-11-06 13:49:53.785927] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:13.024 [2024-11-06 13:49:53.785935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:13.024 [2024-11-06 13:49:53.785941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:13.024 [2024-11-06 13:49:53.785955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.785959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
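
Everything from here down to the identify report is the host-side bring-up that spdk_nvme_identify performs against the discovery subsystem: connect the admin queue over TCP (ICReq/ICResp, FABRIC CONNECT), read the VS and CAP properties, enable the controller by writing CC.EN = 1, poll CSTS.RDY, then issue IDENTIFY, configure AER, and set the keep-alive timeout. To reproduce the run outside the harness, the invocation is the one host/identify.sh used above (the build path is specific to this job; adjust as needed):

# Query the discovery controller with all component debug logs enabled (-L all).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all
# Swapping in subnqn:nqn.2016-06.io.spdk:cnode1 identifies the NVM subsystem instead.
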
00:20:13.024 [2024-11-06 13:49:53.785964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.024 [2024-11-06 13:49:53.785972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.024 [2024-11-06 13:49:53.785993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.024 [2024-11-06 13:49:53.786049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.024 [2024-11-06 13:49:53.786055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.024 [2024-11-06 13:49:53.786058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.786062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.024 [2024-11-06 13:49:53.786068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:13.024 [2024-11-06 13:49:53.786075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:13.024 [2024-11-06 13:49:53.786082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.786086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.024 [2024-11-06 13:49:53.786090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.024 [2024-11-06 13:49:53.786096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.024 [2024-11-06 13:49:53.786110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.024 [2024-11-06 13:49:53.786152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.024 [2024-11-06 13:49:53.786157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:13.025 [2024-11-06 13:49:53.786178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.025 [2024-11-06 13:49:53.786212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.786254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.786260] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.025 [2024-11-06 13:49:53.786308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.786350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.786355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786368] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:13.025 [2024-11-06 13:49:53.786373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786490] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:13.025 [2024-11-06 13:49:53.786495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.025 [2024-11-06 13:49:53.786531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.786588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.786594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:13.025 [2024-11-06 13:49:53.786601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:13.025 [2024-11-06 13:49:53.786615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.025 [2024-11-06 13:49:53.786642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.786686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.786691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786704] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:13.025 [2024-11-06 13:49:53.786709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:13.025 [2024-11-06 13:49:53.786716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:13.025 [2024-11-06 13:49:53.786731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:13.025 [2024-11-06 13:49:53.786740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.025 [2024-11-06 13:49:53.786764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.786833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.025 [2024-11-06 13:49:53.786838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.025 [2024-11-06 13:49:53.786842] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786846] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2104d90): datao=0, datal=4096, cccid=0 00:20:13.025 [2024-11-06 13:49:53.786852] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2145600) on tqpair(0x2104d90): expected_datao=0, payload_size=4096 00:20:13.025 [2024-11-06 13:49:53.786857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786878] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.786892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.786896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.786907] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:13.025 [2024-11-06 13:49:53.786913] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:13.025 [2024-11-06 13:49:53.786918] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:13.025 [2024-11-06 13:49:53.786923] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:13.025 [2024-11-06 13:49:53.786928] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:13.025 [2024-11-06 13:49:53.786933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:13.025 [2024-11-06 13:49:53.786946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:13.025 [2024-11-06 13:49:53.786953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.786961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.786968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.025 [2024-11-06 13:49:53.786983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.025 [2024-11-06 13:49:53.787033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.025 [2024-11-06 13:49:53.787039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.025 [2024-11-06 13:49:53.787042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.025 [2024-11-06 13:49:53.787054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.787067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.025 
[2024-11-06 13:49:53.787073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.787086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.025 [2024-11-06 13:49:53.787092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.787105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.025 [2024-11-06 13:49:53.787110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.025 [2024-11-06 13:49:53.787118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.025 [2024-11-06 13:49:53.787123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.026 [2024-11-06 13:49:53.787128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:13.026 [2024-11-06 13:49:53.787139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:13.026 [2024-11-06 13:49:53.787155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.787165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.026 [2024-11-06 13:49:53.787180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145600, cid 0, qid 0 00:20:13.026 [2024-11-06 13:49:53.787185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145780, cid 1, qid 0 00:20:13.026 [2024-11-06 13:49:53.787190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145900, cid 2, qid 0 00:20:13.026 [2024-11-06 13:49:53.787195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.026 [2024-11-06 13:49:53.787199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145c00, cid 4, qid 0 00:20:13.026 [2024-11-06 13:49:53.787271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.026 [2024-11-06 13:49:53.787277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.026 [2024-11-06 13:49:53.787280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145c00) on tqpair=0x2104d90 00:20:13.026 [2024-11-06 
13:49:53.787290] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:13.026 [2024-11-06 13:49:53.787295] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:13.026 [2024-11-06 13:49:53.787305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.787315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.026 [2024-11-06 13:49:53.787328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145c00, cid 4, qid 0 00:20:13.026 [2024-11-06 13:49:53.787377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.026 [2024-11-06 13:49:53.787383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.026 [2024-11-06 13:49:53.787387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2104d90): datao=0, datal=4096, cccid=4 00:20:13.026 [2024-11-06 13:49:53.787395] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2145c00) on tqpair(0x2104d90): expected_datao=0, payload_size=4096 00:20:13.026 [2024-11-06 13:49:53.787400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.026 [2024-11-06 13:49:53.787422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.026 [2024-11-06 13:49:53.787427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145c00) on tqpair=0x2104d90 00:20:13.026 [2024-11-06 13:49:53.787442] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:13.026 [2024-11-06 13:49:53.787468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.787478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.026 [2024-11-06 13:49:53.787485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.787499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.026 [2024-11-06 13:49:53.787520] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145c00, cid 4, qid 0 00:20:13.026 [2024-11-06 13:49:53.787525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145d80, cid 5, qid 0 00:20:13.026 [2024-11-06 13:49:53.787609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.026 [2024-11-06 13:49:53.787614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.026 [2024-11-06 13:49:53.787618] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787622] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2104d90): datao=0, datal=1024, cccid=4 00:20:13.026 [2024-11-06 13:49:53.787627] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2145c00) on tqpair(0x2104d90): expected_datao=0, payload_size=1024 00:20:13.026 [2024-11-06 13:49:53.787631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787638] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.026 [2024-11-06 13:49:53.787652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.026 [2024-11-06 13:49:53.787656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.787660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145d80) on tqpair=0x2104d90 00:20:13.026 [2024-11-06 13:49:53.829872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.026 [2024-11-06 13:49:53.829887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.026 [2024-11-06 13:49:53.829891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.829895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145c00) on tqpair=0x2104d90 00:20:13.026 [2024-11-06 13:49:53.829906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.829927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.829934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.026 [2024-11-06 13:49:53.829958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145c00, cid 4, qid 0 00:20:13.026 [2024-11-06 13:49:53.830009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.026 [2024-11-06 13:49:53.830015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.026 [2024-11-06 13:49:53.830019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2104d90): datao=0, datal=3072, cccid=4 00:20:13.026 [2024-11-06 13:49:53.830027] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2145c00) on tqpair(0x2104d90): expected_datao=0, payload_size=3072 00:20:13.026 [2024-11-06 13:49:53.830032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830039] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
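
The GET LOG PAGE (02) commands in this stretch all target log page 0x70, the discovery log: the host first reads the 1024-byte header (datal=1024), then the full page including both records (datal=3072), and finally re-reads the first 8 bytes (datal=8) to confirm the generation counter did not change mid-transfer. Outside SPDK's own tooling, the usual way to read the same page would be nvme-cli, as in the sketch below (assuming nvme-cli is installed and the target from this run is still listening):

# Fetch the same discovery log page from the target; the output should show the
# two records reported below (the discovery subsystem and nqn.2016-06.io.spdk:cnode1).
nvme discover -t tcp -a 10.0.0.3 -s 4420
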
00:20:13.026 [2024-11-06 13:49:53.830042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.026 [2024-11-06 13:49:53.830055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.026 [2024-11-06 13:49:53.830059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145c00) on tqpair=0x2104d90 00:20:13.026 [2024-11-06 13:49:53.830071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2104d90) 00:20:13.026 [2024-11-06 13:49:53.830081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.026 [2024-11-06 13:49:53.830100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145c00, cid 4, qid 0 00:20:13.026 [2024-11-06 13:49:53.830155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.026 [2024-11-06 13:49:53.830160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.026 [2024-11-06 13:49:53.830164] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830168] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2104d90): datao=0, datal=8, cccid=4 00:20:13.026 [2024-11-06 13:49:53.830173] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2145c00) on tqpair(0x2104d90): expected_datao=0, payload_size=8 00:20:13.026 [2024-11-06 13:49:53.830177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830183] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.026 [2024-11-06 13:49:53.830187] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.026 ===================================================== 00:20:13.026 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:13.026 ===================================================== 00:20:13.026 Controller Capabilities/Features 00:20:13.026 ================================ 00:20:13.026 Vendor ID: 0000 00:20:13.026 Subsystem Vendor ID: 0000 00:20:13.026 Serial Number: .................... 00:20:13.026 Model Number: ........................................ 
00:20:13.026 Firmware Version: 25.01 00:20:13.026 Recommended Arb Burst: 0 00:20:13.026 IEEE OUI Identifier: 00 00 00 00:20:13.026 Multi-path I/O 00:20:13.026 May have multiple subsystem ports: No 00:20:13.026 May have multiple controllers: No 00:20:13.026 Associated with SR-IOV VF: No 00:20:13.026 Max Data Transfer Size: 131072 00:20:13.026 Max Number of Namespaces: 0 00:20:13.026 Max Number of I/O Queues: 1024 00:20:13.026 NVMe Specification Version (VS): 1.3 00:20:13.026 NVMe Specification Version (Identify): 1.3 00:20:13.026 Maximum Queue Entries: 128 00:20:13.026 Contiguous Queues Required: Yes 00:20:13.026 Arbitration Mechanisms Supported 00:20:13.026 Weighted Round Robin: Not Supported 00:20:13.026 Vendor Specific: Not Supported 00:20:13.026 Reset Timeout: 15000 ms 00:20:13.026 Doorbell Stride: 4 bytes 00:20:13.027 NVM Subsystem Reset: Not Supported 00:20:13.027 Command Sets Supported 00:20:13.027 NVM Command Set: Supported 00:20:13.027 Boot Partition: Not Supported 00:20:13.027 Memory Page Size Minimum: 4096 bytes 00:20:13.027 Memory Page Size Maximum: 4096 bytes 00:20:13.027 Persistent Memory Region: Not Supported 00:20:13.027 Optional Asynchronous Events Supported 00:20:13.027 Namespace Attribute Notices: Not Supported 00:20:13.027 Firmware Activation Notices: Not Supported 00:20:13.027 ANA Change Notices: Not Supported 00:20:13.027 PLE Aggregate Log Change Notices: Not Supported 00:20:13.027 LBA Status Info Alert Notices: Not Supported 00:20:13.027 EGE Aggregate Log Change Notices: Not Supported 00:20:13.027 Normal NVM Subsystem Shutdown event: Not Supported 00:20:13.027 Zone Descriptor Change Notices: Not Supported 00:20:13.027 Discovery Log Change Notices: Supported 00:20:13.027 Controller Attributes 00:20:13.027 128-bit Host Identifier: Not Supported 00:20:13.027 Non-Operational Permissive Mode: Not Supported 00:20:13.027 NVM Sets: Not Supported 00:20:13.027 Read Recovery Levels: Not Supported 00:20:13.027 Endurance Groups: Not Supported 00:20:13.027 Predictable Latency Mode: Not Supported 00:20:13.027 Traffic Based Keep ALive: Not Supported 00:20:13.027 Namespace Granularity: Not Supported 00:20:13.027 SQ Associations: Not Supported 00:20:13.027 UUID List: Not Supported 00:20:13.027 Multi-Domain Subsystem: Not Supported 00:20:13.027 Fixed Capacity Management: Not Supported 00:20:13.027 Variable Capacity Management: Not Supported 00:20:13.027 Delete Endurance Group: Not Supported 00:20:13.027 Delete NVM Set: Not Supported 00:20:13.027 Extended LBA Formats Supported: Not Supported 00:20:13.027 Flexible Data Placement Supported: Not Supported 00:20:13.027 00:20:13.027 Controller Memory Buffer Support 00:20:13.027 ================================ 00:20:13.027 Supported: No 00:20:13.027 00:20:13.027 Persistent Memory Region Support 00:20:13.027 ================================ 00:20:13.027 Supported: No 00:20:13.027 00:20:13.027 Admin Command Set Attributes 00:20:13.027 ============================ 00:20:13.027 Security Send/Receive: Not Supported 00:20:13.027 Format NVM: Not Supported 00:20:13.027 Firmware Activate/Download: Not Supported 00:20:13.027 Namespace Management: Not Supported 00:20:13.027 Device Self-Test: Not Supported 00:20:13.027 Directives: Not Supported 00:20:13.027 NVMe-MI: Not Supported 00:20:13.027 Virtualization Management: Not Supported 00:20:13.027 Doorbell Buffer Config: Not Supported 00:20:13.027 Get LBA Status Capability: Not Supported 00:20:13.027 Command & Feature Lockdown Capability: Not Supported 00:20:13.027 Abort Command Limit: 1 00:20:13.027 Async 
Event Request Limit: 4 00:20:13.027 Number of Firmware Slots: N/A 00:20:13.027 Firmware Slot 1 Read-Only: N/A 00:20:13.027 [2024-11-06 13:49:53.870923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.027 [2024-11-06 13:49:53.870938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.027 [2024-11-06 13:49:53.870942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.027 [2024-11-06 13:49:53.870946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145c00) on tqpair=0x2104d90 00:20:13.027 Firmware Activation Without Reset: N/A 00:20:13.027 Multiple Update Detection Support: N/A 00:20:13.027 Firmware Update Granularity: No Information Provided 00:20:13.027 Per-Namespace SMART Log: No 00:20:13.027 Asymmetric Namespace Access Log Page: Not Supported 00:20:13.027 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:13.027 Command Effects Log Page: Not Supported 00:20:13.027 Get Log Page Extended Data: Supported 00:20:13.027 Telemetry Log Pages: Not Supported 00:20:13.027 Persistent Event Log Pages: Not Supported 00:20:13.027 Supported Log Pages Log Page: May Support 00:20:13.027 Commands Supported & Effects Log Page: Not Supported 00:20:13.027 Feature Identifiers & Effects Log Page:May Support 00:20:13.027 NVMe-MI Commands & Effects Log Page: May Support 00:20:13.027 Data Area 4 for Telemetry Log: Not Supported 00:20:13.027 Error Log Page Entries Supported: 128 00:20:13.027 Keep Alive: Not Supported 00:20:13.027 00:20:13.027 NVM Command Set Attributes 00:20:13.027 ========================== 00:20:13.027 Submission Queue Entry Size 00:20:13.027 Max: 1 00:20:13.027 Min: 1 00:20:13.027 Completion Queue Entry Size 00:20:13.027 Max: 1 00:20:13.027 Min: 1 00:20:13.027 Number of Namespaces: 0 00:20:13.027 Compare Command: Not Supported 00:20:13.027 Write Uncorrectable Command: Not Supported 00:20:13.027 Dataset Management Command: Not Supported 00:20:13.027 Write Zeroes Command: Not Supported 00:20:13.027 Set Features Save Field: Not Supported 00:20:13.027 Reservations: Not Supported 00:20:13.027 Timestamp: Not Supported 00:20:13.027 Copy: Not Supported 00:20:13.027 Volatile Write Cache: Not Present 00:20:13.027 Atomic Write Unit (Normal): 1 00:20:13.027 Atomic Write Unit (PFail): 1 00:20:13.027 Atomic Compare & Write Unit: 1 00:20:13.027 Fused Compare & Write: Supported 00:20:13.027 Scatter-Gather List 00:20:13.027 SGL Command Set: Supported 00:20:13.027 SGL Keyed: Supported 00:20:13.027 SGL Bit Bucket Descriptor: Not Supported 00:20:13.027 SGL Metadata Pointer: Not Supported 00:20:13.027 Oversized SGL: Not Supported 00:20:13.027 SGL Metadata Address: Not Supported 00:20:13.027 SGL Offset: Supported 00:20:13.027 Transport SGL Data Block: Not Supported 00:20:13.027 Replay Protected Memory Block: Not Supported 00:20:13.027 00:20:13.027 Firmware Slot Information 00:20:13.027 ========================= 00:20:13.027 Active slot: 0 00:20:13.027 00:20:13.027 00:20:13.027 Error Log 00:20:13.027 ========= 00:20:13.027 00:20:13.027 Active Namespaces 00:20:13.027 ================= 00:20:13.027 Discovery Log Page 00:20:13.027 ================== 00:20:13.027 Generation Counter: 2 00:20:13.027 Number of Records: 2 00:20:13.027 Record Format: 0 00:20:13.027 00:20:13.027 Discovery Log Entry 0 00:20:13.027 ---------------------- 00:20:13.027 Transport Type: 3 (TCP) 00:20:13.027 Address Family: 1 (IPv4) 00:20:13.027 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:13.027 Entry Flags: 00:20:13.027 Duplicate Returned 
Information: 1 00:20:13.027 Explicit Persistent Connection Support for Discovery: 1 00:20:13.027 Transport Requirements: 00:20:13.027 Secure Channel: Not Required 00:20:13.027 Port ID: 0 (0x0000) 00:20:13.027 Controller ID: 65535 (0xffff) 00:20:13.027 Admin Max SQ Size: 128 00:20:13.027 Transport Service Identifier: 4420 00:20:13.027 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:13.027 Transport Address: 10.0.0.3 00:20:13.027 Discovery Log Entry 1 00:20:13.027 ---------------------- 00:20:13.027 Transport Type: 3 (TCP) 00:20:13.027 Address Family: 1 (IPv4) 00:20:13.027 Subsystem Type: 2 (NVM Subsystem) 00:20:13.027 Entry Flags: 00:20:13.027 Duplicate Returned Information: 0 00:20:13.027 Explicit Persistent Connection Support for Discovery: 0 00:20:13.027 Transport Requirements: 00:20:13.027 Secure Channel: Not Required 00:20:13.027 Port ID: 0 (0x0000) 00:20:13.027 Controller ID: 65535 (0xffff) 00:20:13.027 Admin Max SQ Size: 128 00:20:13.027 Transport Service Identifier: 4420 00:20:13.027 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:13.027 Transport Address: 10.0.0.3 [2024-11-06 13:49:53.871055] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:13.027 [2024-11-06 13:49:53.871068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145600) on tqpair=0x2104d90 00:20:13.027 [2024-11-06 13:49:53.871075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.027 [2024-11-06 13:49:53.871081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145780) on tqpair=0x2104d90 00:20:13.027 [2024-11-06 13:49:53.871085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.027 [2024-11-06 13:49:53.871090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145900) on tqpair=0x2104d90 00:20:13.027 [2024-11-06 13:49:53.871095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.027 [2024-11-06 13:49:53.871100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.027 [2024-11-06 13:49:53.871105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.027 [2024-11-06 13:49:53.871114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.027 [2024-11-06 13:49:53.871118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.027 [2024-11-06 13:49:53.871122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.027 [2024-11-06 13:49:53.871129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871229] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871345] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:13.028 [2024-11-06 13:49:53.871351] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:13.028 [2024-11-06 13:49:53.871359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 
13:49:53.871529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on 
tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.871908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.871922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.871936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.871984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.871989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.871993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.871997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.872005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.872009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.872013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.872019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.872032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.028 [2024-11-06 13:49:53.872077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.028 [2024-11-06 13:49:53.872083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.028 [2024-11-06 13:49:53.872086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.872090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.028 [2024-11-06 13:49:53.872099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.872103] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.028 [2024-11-06 13:49:53.872106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.028 [2024-11-06 13:49:53.872112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.028 [2024-11-06 13:49:53.872125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 [... the identical CSTS shutdown-poll iteration (pdu type = 5 -> complete tcp_req(0x2145a80) -> FABRIC PROPERTY GET qid:0 cid:3 -> cmd_send_complete) repeats unchanged from 13:49:53.872165 through 13:49:53.873829; duplicate iterations elided ...] 00:20:13.030 [2024-11-06 13:49:53.873833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.030 [2024-11-06 13:49:53.873836] nvme_tcp.c:
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.030 [2024-11-06 13:49:53.873842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.030 [2024-11-06 13:49:53.873856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.030 [2024-11-06 13:49:53.877882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.030 [2024-11-06 13:49:53.877900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.030 [2024-11-06 13:49:53.877905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.030 [2024-11-06 13:49:53.877909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.030 [2024-11-06 13:49:53.877920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.030 [2024-11-06 13:49:53.877924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.030 [2024-11-06 13:49:53.877928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2104d90) 00:20:13.030 [2024-11-06 13:49:53.877934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.030 [2024-11-06 13:49:53.877956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2145a80, cid 3, qid 0 00:20:13.030 [2024-11-06 13:49:53.877997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.030 [2024-11-06 13:49:53.878003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.030 [2024-11-06 13:49:53.878007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.030 [2024-11-06 13:49:53.878011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2145a80) on tqpair=0x2104d90 00:20:13.030 [2024-11-06 13:49:53.878018] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:20:13.030 00:20:13.030 13:49:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:13.030 [2024-11-06 13:49:53.922037] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
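For context, the spdk_nvme_identify invocation above is a thin CLI wrapper over SPDK's public connect path. Below is a minimal sketch of the equivalent API usage, assuming the same target (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1) is reachable; error handling is trimmed and the application name is a hypothetical placeholder.

    #include <stdio.h>
    #include <string.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Bring up the SPDK environment (the "DPDK EAL parameters"
         * line above is this step in the real tool). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport string the test passes to -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect; internally this drives the FABRIC
         * CONNECT and PROPERTY GET/SET exchange traced below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* A few of the IDENTIFY CONTROLLER fields the tool prints. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);
        printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }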
00:20:13.030 [2024-11-06 13:49:53.922089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87678 ] 00:20:13.296 [2024-11-06 13:49:54.069300] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:13.296 [2024-11-06 13:49:54.069352] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:13.296 [2024-11-06 13:49:54.069358] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:13.296 [2024-11-06 13:49:54.069372] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:13.296 [2024-11-06 13:49:54.069382] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:13.296 [2024-11-06 13:49:54.069642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:13.296 [2024-11-06 13:49:54.069687] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10aad90 0 00:20:13.296 [2024-11-06 13:49:54.081875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:13.296 [2024-11-06 13:49:54.081892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:13.296 [2024-11-06 13:49:54.081897] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:13.296 [2024-11-06 13:49:54.081901] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:13.296 [2024-11-06 13:49:54.081928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.081933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.081937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.296 [2024-11-06 13:49:54.081948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:13.296 [2024-11-06 13:49:54.081973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.296 [2024-11-06 13:49:54.089875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.296 [2024-11-06 13:49:54.089891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.296 [2024-11-06 13:49:54.089896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.089901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.296 [2024-11-06 13:49:54.089910] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:13.296 [2024-11-06 13:49:54.089917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:13.296 [2024-11-06 13:49:54.089924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:13.296 [2024-11-06 13:49:54.089937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.089941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.089945] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.296 [2024-11-06 13:49:54.089954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.296 [2024-11-06 13:49:54.089976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.296 [2024-11-06 13:49:54.090029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.296 [2024-11-06 13:49:54.090035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.296 [2024-11-06 13:49:54.090039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.090043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.296 [2024-11-06 13:49:54.090048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:13.296 [2024-11-06 13:49:54.090056] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:13.296 [2024-11-06 13:49:54.090062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.090066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.296 [2024-11-06 13:49:54.090070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.296 [2024-11-06 13:49:54.090076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.090142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.090151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:13.297 [2024-11-06 13:49:54.090159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 
[2024-11-06 13:49:54.090247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.090256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.090339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.090348] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:13.297 [2024-11-06 13:49:54.090353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090469] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:13.297 [2024-11-06 13:49:54.090475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.090562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 
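At this point the driver has observed CC.EN = 0 && CSTS.RDY = 0, written CC.EN = 1 (the FABRIC PROPERTY SET just above), and in the records that follow it polls CSTS until RDY = 1. For readers unfamiliar with the handshake, here is an illustrative sketch of the controller-enable sequence as the NVMe spec defines it; this is not SPDK's code, and reg_read32()/reg_write32() are hypothetical stand-ins for the transport's Property Get/Set plumbing.

    #include <stdbool.h>
    #include <stdint.h>

    /* Controller register offsets per the NVMe spec. */
    #define NVME_REG_CC   0x14
    #define NVME_REG_CSTS 0x1c
    #define NVME_CC_EN    (1u << 0)
    #define NVME_CSTS_RDY (1u << 0)

    extern uint32_t reg_read32(uint32_t off);              /* hypothetical */
    extern void     reg_write32(uint32_t off, uint32_t v); /* hypothetical */

    static bool wait_rdy(bool want)
    {
        /* Real drivers bound this loop by CAP.TO (units of 500 ms);
         * the 15000 ms timeouts in the log come from that field. */
        for (int i = 0; i < 15000; i++) {
            bool rdy = (reg_read32(NVME_REG_CSTS) & NVME_CSTS_RDY) != 0;
            if (rdy == want) {
                return true;
            }
        }
        return false;
    }

    static bool enable_controller(void)
    {
        uint32_t cc = reg_read32(NVME_REG_CC);

        /* Disable first and wait for CSTS.RDY = 0 ... */
        reg_write32(NVME_REG_CC, cc & ~NVME_CC_EN);
        if (!wait_rdy(false)) {
            return false;
        }
        /* ... then set CC.EN = 1 and wait for CSTS.RDY = 1. */
        reg_write32(NVME_REG_CC, cc | NVME_CC_EN);
        return wait_rdy(true);
    }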
00:20:13.297 [2024-11-06 13:49:54.090570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:13.297 [2024-11-06 13:49:54.090579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.090655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.090664] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:13.297 [2024-11-06 13:49:54.090669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:13.297 [2024-11-06 13:49:54.090677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:13.297 [2024-11-06 13:49:54.090691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:13.297 [2024-11-06 13:49:54.090700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.297 [2024-11-06 13:49:54.090724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.090812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.297 [2024-11-06 13:49:54.090818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.297 [2024-11-06 13:49:54.090822] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090826] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=4096, cccid=0 00:20:13.297 [2024-11-06 13:49:54.090831] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10eb600) on tqpair(0x10aad90): expected_datao=0, payload_size=4096 00:20:13.297 [2024-11-06 13:49:54.090836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090843] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090847] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.090872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.090875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.090887] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:13.297 [2024-11-06 13:49:54.090892] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:13.297 [2024-11-06 13:49:54.090897] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:13.297 [2024-11-06 13:49:54.090902] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:13.297 [2024-11-06 13:49:54.090907] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:13.297 [2024-11-06 13:49:54.090912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:13.297 [2024-11-06 13:49:54.090924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:13.297 [2024-11-06 13:49:54.090930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.090938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.090945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.297 [2024-11-06 13:49:54.090959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.297 [2024-11-06 13:49:54.091007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.297 [2024-11-06 13:49:54.091013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.297 [2024-11-06 13:49:54.091017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.297 [2024-11-06 13:49:54.091027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.091040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.297 [2024-11-06 13:49:54.091046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 
13:49:54.091054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.091059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.297 [2024-11-06 13:49:54.091065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10aad90) 00:20:13.297 [2024-11-06 13:49:54.091078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.297 [2024-11-06 13:49:54.091084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.297 [2024-11-06 13:49:54.091088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.298 [2024-11-06 13:49:54.091102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.298 [2024-11-06 13:49:54.091154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb600, cid 0, qid 0 00:20:13.298 [2024-11-06 13:49:54.091159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb780, cid 1, qid 0 00:20:13.298 [2024-11-06 13:49:54.091164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eb900, cid 2, qid 0 00:20:13.298 [2024-11-06 13:49:54.091169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.298 [2024-11-06 13:49:54.091173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.298 [2024-11-06 13:49:54.091252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.298 [2024-11-06 13:49:54.091258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.298 [2024-11-06 13:49:54.091262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.298 [2024-11-06 13:49:54.091271] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:13.298 [2024-11-06 13:49:54.091277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.298 [2024-11-06 13:49:54.091329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.298 [2024-11-06 13:49:54.091373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.298 [2024-11-06 13:49:54.091378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.298 [2024-11-06 13:49:54.091382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.298 [2024-11-06 13:49:54.091437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.298 [2024-11-06 13:49:54.091479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.298 [2024-11-06 13:49:54.091528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.298 [2024-11-06 13:49:54.091534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.298 [2024-11-06 13:49:54.091537] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091541] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=4096, cccid=4 00:20:13.298 [2024-11-06 13:49:54.091546] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebc00) on tqpair(0x10aad90): expected_datao=0, payload_size=4096 00:20:13.298 [2024-11-06 13:49:54.091551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091558] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 
13:49:54.091569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.298 [2024-11-06 13:49:54.091575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.298 [2024-11-06 13:49:54.091578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.298 [2024-11-06 13:49:54.091595] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:13.298 [2024-11-06 13:49:54.091607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.298 [2024-11-06 13:49:54.091647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.298 [2024-11-06 13:49:54.091712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.298 [2024-11-06 13:49:54.091718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.298 [2024-11-06 13:49:54.091721] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091725] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=4096, cccid=4 00:20:13.298 [2024-11-06 13:49:54.091730] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebc00) on tqpair(0x10aad90): expected_datao=0, payload_size=4096 00:20:13.298 [2024-11-06 13:49:54.091735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091744] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.298 [2024-11-06 13:49:54.091757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.298 [2024-11-06 13:49:54.091761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.298 [2024-11-06 13:49:54.091777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.091803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.298 [2024-11-06 13:49:54.091816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.298 [2024-11-06 13:49:54.091875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.298 [2024-11-06 13:49:54.091881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.298 [2024-11-06 13:49:54.091884] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=4096, cccid=4 00:20:13.298 [2024-11-06 13:49:54.091893] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebc00) on tqpair(0x10aad90): expected_datao=0, payload_size=4096 00:20:13.298 [2024-11-06 13:49:54.091897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091907] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.298 [2024-11-06 13:49:54.091920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.298 [2024-11-06 13:49:54.091924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.298 [2024-11-06 13:49:54.091929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.298 [2024-11-06 13:49:54.091936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091978] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:13.298 [2024-11-06 13:49:54.091983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:13.298 [2024-11-06 13:49:54.091989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:13.298 [2024-11-06 13:49:54.092002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.298 
[2024-11-06 13:49:54.092006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.298 [2024-11-06 13:49:54.092012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.298 [2024-11-06 13:49:54.092019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.299 [2024-11-06 13:49:54.092051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.299 [2024-11-06 13:49:54.092057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebd80, cid 5, qid 0 00:20:13.299 [2024-11-06 13:49:54.092111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebd80) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebd80, cid 5, qid 0 00:20:13.299 [2024-11-06 13:49:54.092221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebd80) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092266] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebd80, cid 5, qid 0 00:20:13.299 [2024-11-06 13:49:54.092304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebd80) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebd80, cid 5, qid 0 00:20:13.299 [2024-11-06 13:49:54.092391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebd80) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10aad90) 00:20:13.299 [2024-11-06 13:49:54.092479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.299 [2024-11-06 13:49:54.092494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebd80, cid 5, qid 0 00:20:13.299 
[2024-11-06 13:49:54.092499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebc00, cid 4, qid 0 00:20:13.299 [2024-11-06 13:49:54.092503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ebf00, cid 6, qid 0 00:20:13.299 [2024-11-06 13:49:54.092508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ec080, cid 7, qid 0 00:20:13.299 [2024-11-06 13:49:54.092618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.299 [2024-11-06 13:49:54.092624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.299 [2024-11-06 13:49:54.092628] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092631] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=8192, cccid=5 00:20:13.299 [2024-11-06 13:49:54.092636] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebd80) on tqpair(0x10aad90): expected_datao=0, payload_size=8192 00:20:13.299 [2024-11-06 13:49:54.092641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092654] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092658] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.299 [2024-11-06 13:49:54.092669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.299 [2024-11-06 13:49:54.092673] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=512, cccid=4 00:20:13.299 [2024-11-06 13:49:54.092681] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebc00) on tqpair(0x10aad90): expected_datao=0, payload_size=512 00:20:13.299 [2024-11-06 13:49:54.092686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092691] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.299 [2024-11-06 13:49:54.092706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.299 [2024-11-06 13:49:54.092709] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092713] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=512, cccid=6 00:20:13.299 [2024-11-06 13:49:54.092718] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ebf00) on tqpair(0x10aad90): expected_datao=0, payload_size=512 00:20:13.299 [2024-11-06 13:49:54.092722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092728] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092732] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:13.299 [2024-11-06 13:49:54.092742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:13.299 [2024-11-06 13:49:54.092746] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092750] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10aad90): datao=0, datal=4096, cccid=7 00:20:13.299 [2024-11-06 13:49:54.092755] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ec080) on tqpair(0x10aad90): expected_datao=0, payload_size=4096 00:20:13.299 [2024-11-06 13:49:54.092760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092766] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092770] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebd80) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebc00) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ebf00) on tqpair=0x10aad90 00:20:13.299 [2024-11-06 13:49:54.092849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.299 [2024-11-06 13:49:54.092855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.299 [2024-11-06 13:49:54.092868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.299 [2024-11-06 13:49:54.092872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ec080) on tqpair=0x10aad90 00:20:13.299 ===================================================== 00:20:13.299 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.299 ===================================================== 00:20:13.299 Controller Capabilities/Features 00:20:13.299 ================================ 00:20:13.299 Vendor ID: 8086 00:20:13.299 Subsystem Vendor ID: 8086 00:20:13.299 Serial Number: SPDK00000000000001 00:20:13.300 Model Number: SPDK bdev Controller 00:20:13.300 Firmware Version: 25.01 00:20:13.300 Recommended Arb Burst: 6 00:20:13.300 IEEE OUI Identifier: e4 d2 5c 00:20:13.300 Multi-path I/O 00:20:13.300 May have multiple subsystem ports: Yes 00:20:13.300 May have multiple controllers: Yes 00:20:13.300 Associated with SR-IOV VF: No 00:20:13.300 Max Data Transfer Size: 131072 00:20:13.300 Max Number of Namespaces: 32 00:20:13.300 Max Number of I/O Queues: 127 00:20:13.300 NVMe Specification Version (VS): 1.3 00:20:13.300 NVMe Specification Version (Identify): 1.3 
00:20:13.300 Maximum Queue Entries: 128 00:20:13.300 Contiguous Queues Required: Yes 00:20:13.300 Arbitration Mechanisms Supported 00:20:13.300 Weighted Round Robin: Not Supported 00:20:13.300 Vendor Specific: Not Supported 00:20:13.300 Reset Timeout: 15000 ms 00:20:13.300 Doorbell Stride: 4 bytes 00:20:13.300 NVM Subsystem Reset: Not Supported 00:20:13.300 Command Sets Supported 00:20:13.300 NVM Command Set: Supported 00:20:13.300 Boot Partition: Not Supported 00:20:13.300 Memory Page Size Minimum: 4096 bytes 00:20:13.300 Memory Page Size Maximum: 4096 bytes 00:20:13.300 Persistent Memory Region: Not Supported 00:20:13.300 Optional Asynchronous Events Supported 00:20:13.300 Namespace Attribute Notices: Supported 00:20:13.300 Firmware Activation Notices: Not Supported 00:20:13.300 ANA Change Notices: Not Supported 00:20:13.300 PLE Aggregate Log Change Notices: Not Supported 00:20:13.300 LBA Status Info Alert Notices: Not Supported 00:20:13.300 EGE Aggregate Log Change Notices: Not Supported 00:20:13.300 Normal NVM Subsystem Shutdown event: Not Supported 00:20:13.300 Zone Descriptor Change Notices: Not Supported 00:20:13.300 Discovery Log Change Notices: Not Supported 00:20:13.300 Controller Attributes 00:20:13.300 128-bit Host Identifier: Supported 00:20:13.300 Non-Operational Permissive Mode: Not Supported 00:20:13.300 NVM Sets: Not Supported 00:20:13.300 Read Recovery Levels: Not Supported 00:20:13.300 Endurance Groups: Not Supported 00:20:13.300 Predictable Latency Mode: Not Supported 00:20:13.300 Traffic Based Keep ALive: Not Supported 00:20:13.300 Namespace Granularity: Not Supported 00:20:13.300 SQ Associations: Not Supported 00:20:13.300 UUID List: Not Supported 00:20:13.300 Multi-Domain Subsystem: Not Supported 00:20:13.300 Fixed Capacity Management: Not Supported 00:20:13.300 Variable Capacity Management: Not Supported 00:20:13.300 Delete Endurance Group: Not Supported 00:20:13.300 Delete NVM Set: Not Supported 00:20:13.300 Extended LBA Formats Supported: Not Supported 00:20:13.300 Flexible Data Placement Supported: Not Supported 00:20:13.300 00:20:13.300 Controller Memory Buffer Support 00:20:13.300 ================================ 00:20:13.300 Supported: No 00:20:13.300 00:20:13.300 Persistent Memory Region Support 00:20:13.300 ================================ 00:20:13.300 Supported: No 00:20:13.300 00:20:13.300 Admin Command Set Attributes 00:20:13.300 ============================ 00:20:13.300 Security Send/Receive: Not Supported 00:20:13.300 Format NVM: Not Supported 00:20:13.300 Firmware Activate/Download: Not Supported 00:20:13.300 Namespace Management: Not Supported 00:20:13.300 Device Self-Test: Not Supported 00:20:13.300 Directives: Not Supported 00:20:13.300 NVMe-MI: Not Supported 00:20:13.300 Virtualization Management: Not Supported 00:20:13.300 Doorbell Buffer Config: Not Supported 00:20:13.300 Get LBA Status Capability: Not Supported 00:20:13.300 Command & Feature Lockdown Capability: Not Supported 00:20:13.300 Abort Command Limit: 4 00:20:13.300 Async Event Request Limit: 4 00:20:13.300 Number of Firmware Slots: N/A 00:20:13.300 Firmware Slot 1 Read-Only: N/A 00:20:13.300 Firmware Activation Without Reset: N/A 00:20:13.300 Multiple Update Detection Support: N/A 00:20:13.300 Firmware Update Granularity: No Information Provided 00:20:13.300 Per-Namespace SMART Log: No 00:20:13.300 Asymmetric Namespace Access Log Page: Not Supported 00:20:13.300 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:13.300 Command Effects Log Page: Supported 00:20:13.300 Get Log Page Extended 
Data: Supported 00:20:13.300 Telemetry Log Pages: Not Supported 00:20:13.300 Persistent Event Log Pages: Not Supported 00:20:13.300 Supported Log Pages Log Page: May Support 00:20:13.300 Commands Supported & Effects Log Page: Not Supported 00:20:13.300 Feature Identifiers & Effects Log Page:May Support 00:20:13.300 NVMe-MI Commands & Effects Log Page: May Support 00:20:13.300 Data Area 4 for Telemetry Log: Not Supported 00:20:13.300 Error Log Page Entries Supported: 128 00:20:13.300 Keep Alive: Supported 00:20:13.300 Keep Alive Granularity: 10000 ms 00:20:13.300 00:20:13.300 NVM Command Set Attributes 00:20:13.300 ========================== 00:20:13.300 Submission Queue Entry Size 00:20:13.300 Max: 64 00:20:13.300 Min: 64 00:20:13.300 Completion Queue Entry Size 00:20:13.300 Max: 16 00:20:13.300 Min: 16 00:20:13.300 Number of Namespaces: 32 00:20:13.300 Compare Command: Supported 00:20:13.300 Write Uncorrectable Command: Not Supported 00:20:13.300 Dataset Management Command: Supported 00:20:13.300 Write Zeroes Command: Supported 00:20:13.300 Set Features Save Field: Not Supported 00:20:13.300 Reservations: Supported 00:20:13.300 Timestamp: Not Supported 00:20:13.300 Copy: Supported 00:20:13.300 Volatile Write Cache: Present 00:20:13.300 Atomic Write Unit (Normal): 1 00:20:13.300 Atomic Write Unit (PFail): 1 00:20:13.300 Atomic Compare & Write Unit: 1 00:20:13.300 Fused Compare & Write: Supported 00:20:13.300 Scatter-Gather List 00:20:13.300 SGL Command Set: Supported 00:20:13.300 SGL Keyed: Supported 00:20:13.300 SGL Bit Bucket Descriptor: Not Supported 00:20:13.300 SGL Metadata Pointer: Not Supported 00:20:13.300 Oversized SGL: Not Supported 00:20:13.300 SGL Metadata Address: Not Supported 00:20:13.300 SGL Offset: Supported 00:20:13.300 Transport SGL Data Block: Not Supported 00:20:13.300 Replay Protected Memory Block: Not Supported 00:20:13.300 00:20:13.300 Firmware Slot Information 00:20:13.300 ========================= 00:20:13.300 Active slot: 1 00:20:13.300 Slot 1 Firmware Revision: 25.01 00:20:13.300 00:20:13.300 00:20:13.300 Commands Supported and Effects 00:20:13.300 ============================== 00:20:13.300 Admin Commands 00:20:13.300 -------------- 00:20:13.300 Get Log Page (02h): Supported 00:20:13.300 Identify (06h): Supported 00:20:13.300 Abort (08h): Supported 00:20:13.300 Set Features (09h): Supported 00:20:13.300 Get Features (0Ah): Supported 00:20:13.300 Asynchronous Event Request (0Ch): Supported 00:20:13.300 Keep Alive (18h): Supported 00:20:13.300 I/O Commands 00:20:13.300 ------------ 00:20:13.300 Flush (00h): Supported LBA-Change 00:20:13.300 Write (01h): Supported LBA-Change 00:20:13.300 Read (02h): Supported 00:20:13.300 Compare (05h): Supported 00:20:13.300 Write Zeroes (08h): Supported LBA-Change 00:20:13.300 Dataset Management (09h): Supported LBA-Change 00:20:13.300 Copy (19h): Supported LBA-Change 00:20:13.300 00:20:13.300 Error Log 00:20:13.300 ========= 00:20:13.300 00:20:13.300 Arbitration 00:20:13.300 =========== 00:20:13.300 Arbitration Burst: 1 00:20:13.300 00:20:13.300 Power Management 00:20:13.300 ================ 00:20:13.300 Number of Power States: 1 00:20:13.300 Current Power State: Power State #0 00:20:13.300 Power State #0: 00:20:13.300 Max Power: 0.00 W 00:20:13.300 Non-Operational State: Operational 00:20:13.300 Entry Latency: Not Reported 00:20:13.300 Exit Latency: Not Reported 00:20:13.300 Relative Read Throughput: 0 00:20:13.300 Relative Read Latency: 0 00:20:13.300 Relative Write Throughput: 0 00:20:13.300 Relative Write Latency: 0 
00:20:13.300 Idle Power: Not Reported 00:20:13.300 Active Power: Not Reported 00:20:13.300 Non-Operational Permissive Mode: Not Supported 00:20:13.300 00:20:13.300 Health Information 00:20:13.300 ================== 00:20:13.300 Critical Warnings: 00:20:13.300 Available Spare Space: OK 00:20:13.300 Temperature: OK 00:20:13.300 Device Reliability: OK 00:20:13.300 Read Only: No 00:20:13.300 Volatile Memory Backup: OK 00:20:13.300 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:13.300 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:13.300 Available Spare: 0% 00:20:13.300 Available Spare Threshold: 0% 00:20:13.300 Life Percentage Used:[2024-11-06 13:49:54.092967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.300 [2024-11-06 13:49:54.092972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10aad90) 00:20:13.300 [2024-11-06 13:49:54.092978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.300 [2024-11-06 13:49:54.092995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ec080, cid 7, qid 0 00:20:13.301 [2024-11-06 13:49:54.093045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ec080) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093099] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:13.301 [2024-11-06 13:49:54.093108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb600) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.301 [2024-11-06 13:49:54.093120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb780) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.301 [2024-11-06 13:49:54.093130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eb900) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.301 [2024-11-06 13:49:54.093139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.301 [2024-11-06 13:49:54.093152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093351] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:13.301 [2024-11-06 13:49:54.093356] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:13.301 [2024-11-06 13:49:54.093364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093471] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093750] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.093789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.093795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.093798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.093810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.093818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.301 [2024-11-06 13:49:54.093824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.301 [2024-11-06 13:49:54.093837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.301 [2024-11-06 13:49:54.097886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.301 [2024-11-06 13:49:54.097901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.301 [2024-11-06 13:49:54.097905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.097909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.301 [2024-11-06 13:49:54.097919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:13.301 [2024-11-06 13:49:54.097923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:13.302 [2024-11-06 13:49:54.097927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10aad90) 00:20:13.302 [2024-11-06 13:49:54.097934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.302 [2024-11-06 13:49:54.097954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10eba80, cid 3, qid 0 00:20:13.302 [2024-11-06 13:49:54.098002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:13.302 [2024-11-06 13:49:54.098008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:13.302 [2024-11-06 13:49:54.098012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:13.302 [2024-11-06 13:49:54.098016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10eba80) on tqpair=0x10aad90 00:20:13.302 [2024-11-06 13:49:54.098023] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:20:13.302 0% 00:20:13.302 Data Units Read: 0 00:20:13.302 Data Units Written: 0 00:20:13.302 Host Read Commands: 0 00:20:13.302 Host Write Commands: 0 00:20:13.302 Controller Busy Time: 0 minutes 00:20:13.302 Power Cycles: 0 00:20:13.302 Power On Hours: 0 hours 00:20:13.302 Unsafe Shutdowns: 0 00:20:13.302 Unrecoverable Media Errors: 0 00:20:13.302 Lifetime Error Log Entries: 0 00:20:13.302 Warning Temperature Time: 0 minutes 00:20:13.302 Critical Temperature Time: 0 minutes 00:20:13.302 00:20:13.302 Number of Queues 00:20:13.302 ================ 00:20:13.302 Number of I/O Submission Queues: 127 
00:20:13.302 Number of I/O Completion Queues: 127 00:20:13.302 00:20:13.302 Active Namespaces 00:20:13.302 ================= 00:20:13.302 Namespace ID:1 00:20:13.302 Error Recovery Timeout: Unlimited 00:20:13.302 Command Set Identifier: NVM (00h) 00:20:13.302 Deallocate: Supported 00:20:13.302 Deallocated/Unwritten Error: Not Supported 00:20:13.302 Deallocated Read Value: Unknown 00:20:13.302 Deallocate in Write Zeroes: Not Supported 00:20:13.302 Deallocated Guard Field: 0xFFFF 00:20:13.302 Flush: Supported 00:20:13.302 Reservation: Supported 00:20:13.302 Namespace Sharing Capabilities: Multiple Controllers 00:20:13.302 Size (in LBAs): 131072 (0GiB) 00:20:13.302 Capacity (in LBAs): 131072 (0GiB) 00:20:13.302 Utilization (in LBAs): 131072 (0GiB) 00:20:13.302 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:13.302 EUI64: ABCDEF0123456789 00:20:13.302 UUID: a8ab0580-55d0-4a50-bc49-6ac25281caf8 00:20:13.302 Thin Provisioning: Not Supported 00:20:13.302 Per-NS Atomic Units: Yes 00:20:13.302 Atomic Boundary Size (Normal): 0 00:20:13.302 Atomic Boundary Size (PFail): 0 00:20:13.302 Atomic Boundary Offset: 0 00:20:13.302 Maximum Single Source Range Length: 65535 00:20:13.302 Maximum Copy Length: 65535 00:20:13.302 Maximum Source Range Count: 1 00:20:13.302 NGUID/EUI64 Never Reused: No 00:20:13.302 Namespace Write Protected: No 00:20:13.302 Number of LBA Formats: 1 00:20:13.302 Current LBA Format: LBA Format #00 00:20:13.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:13.302 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.302 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.302 rmmod nvme_tcp 00:20:13.302 rmmod nvme_fabrics 00:20:13.575 rmmod nvme_keyring 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87622 ']' 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87622 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 87622 ']' 00:20:13.575 13:49:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 87622 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87622 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:13.575 killing process with pid 87622 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87622' 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 87622 00:20:13.575 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 87622 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:13.834 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:14.094 00:20:14.094 real 0m3.338s 00:20:14.094 user 0m7.766s 00:20:14.094 sys 0m1.094s 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:14.094 ************************************ 00:20:14.094 END TEST nvmf_identify 00:20:14.094 ************************************ 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.094 ************************************ 00:20:14.094 START TEST nvmf_perf 00:20:14.094 ************************************ 00:20:14.094 13:49:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:14.354 * Looking for test storage... 00:20:14.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.354 13:49:55 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.354 --rc genhtml_branch_coverage=1 00:20:14.354 --rc genhtml_function_coverage=1 00:20:14.354 --rc genhtml_legend=1 00:20:14.354 --rc geninfo_all_blocks=1 00:20:14.354 --rc geninfo_unexecuted_blocks=1 00:20:14.354 00:20:14.354 ' 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.354 --rc genhtml_branch_coverage=1 00:20:14.354 --rc genhtml_function_coverage=1 00:20:14.354 --rc genhtml_legend=1 00:20:14.354 --rc geninfo_all_blocks=1 00:20:14.354 --rc geninfo_unexecuted_blocks=1 00:20:14.354 00:20:14.354 ' 00:20:14.354 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:14.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.354 --rc genhtml_branch_coverage=1 00:20:14.354 --rc genhtml_function_coverage=1 00:20:14.354 --rc genhtml_legend=1 00:20:14.354 --rc geninfo_all_blocks=1 00:20:14.354 --rc geninfo_unexecuted_blocks=1 00:20:14.354 00:20:14.355 ' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:14.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.355 --rc genhtml_branch_coverage=1 00:20:14.355 --rc genhtml_function_coverage=1 00:20:14.355 --rc genhtml_legend=1 00:20:14.355 --rc geninfo_all_blocks=1 00:20:14.355 --rc geninfo_unexecuted_blocks=1 00:20:14.355 00:20:14.355 ' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.355 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:14.355 Cannot find device "nvmf_init_br" 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:14.355 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:14.615 Cannot find device "nvmf_init_br2" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:14.615 Cannot find device "nvmf_tgt_br" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:14.615 Cannot find device "nvmf_tgt_br2" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:14.615 Cannot find device "nvmf_init_br" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:14.615 Cannot find device "nvmf_init_br2" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:14.615 Cannot find device "nvmf_tgt_br" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:14.615 Cannot find device "nvmf_tgt_br2" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:14.615 Cannot find device "nvmf_br" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:14.615 Cannot find device "nvmf_init_if" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:14.615 Cannot find device "nvmf_init_if2" 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:14.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:14.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:14.615 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:14.876 13:49:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:14.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:14.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:14.876 00:20:14.876 --- 10.0.0.3 ping statistics --- 00:20:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.876 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:14.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:14.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:20:14.876 00:20:14.876 --- 10.0.0.4 ping statistics --- 00:20:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.876 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:14.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:14.876 00:20:14.876 --- 10.0.0.1 ping statistics --- 00:20:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.876 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:14.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:20:14.876 00:20:14.876 --- 10.0.0.2 ping statistics --- 00:20:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.876 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87904 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87904 00:20:14.876 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 87904 ']' 00:20:14.877 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.877 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.877 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
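
Condensed for reference, the nvmf_veth_init / nvmfappstart trace above amounts to the topology and launch sequence below. This is a minimal sketch assuming the interface names, addresses, and paths shown in the trace; the second initiator/target interface pair (nvmf_init_if2 / nvmf_tgt_if2, 10.0.0.2 / 10.0.0.4) is set up identically and is omitted here.

  # Target-side network namespace, one veth pair per side, joined by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge joins the two halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # initiator -> target reachability check
  # Launch the target inside the namespace, as nvmfappstart does above:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
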
00:20:14.877 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.877 13:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:15.136 [2024-11-06 13:49:55.858652] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:15.136 [2024-11-06 13:49:55.858722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.136 [2024-11-06 13:49:56.011002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.395 [2024-11-06 13:49:56.069266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.395 [2024-11-06 13:49:56.069336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.395 [2024-11-06 13:49:56.069346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.395 [2024-11-06 13:49:56.069355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.395 [2024-11-06 13:49:56.069362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.395 [2024-11-06 13:49:56.070642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.395 [2024-11-06 13:49:56.070896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.395 [2024-11-06 13:49:56.071653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.395 [2024-11-06 13:49:56.071654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:15.963 13:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:16.530 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:16.530 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:16.530 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:16.530 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:16.789 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:16.789 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:16.789 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
00:20:16.789 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:16.789 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.048 [2024-11-06 13:49:57.836434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.048 13:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.307 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:17.307 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:17.566 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:17.566 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:17.566 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:17.826 [2024-11-06 13:49:58.676245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:17.826 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:18.085 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:18.085 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:18.085 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:18.085 13:49:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:19.462 Initializing NVMe Controllers 00:20:19.463 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:19.463 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:19.463 Initialization complete. Launching workers. 00:20:19.463 ======================================================== 00:20:19.463 Latency(us) 00:20:19.463 Device Information : IOPS MiB/s Average min max 00:20:19.463 PCIE (0000:00:10.0) NSID 1 from core 0: 17952.00 70.12 1782.20 678.85 8232.56 00:20:19.463 ======================================================== 00:20:19.463 Total : 17952.00 70.12 1782.20 678.85 8232.56 00:20:19.463 00:20:19.463 13:50:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:20.437 Initializing NVMe Controllers 00:20:20.437 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.437 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.437 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:20.437 Initialization complete. Launching workers. 
00:20:20.437 ======================================================== 00:20:20.437 Latency(us) 00:20:20.437 Device Information : IOPS MiB/s Average min max 00:20:20.437 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4970.93 19.42 200.93 78.22 5051.20 00:20:20.437 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.80 0.48 8141.61 7008.91 12042.96 00:20:20.437 ======================================================== 00:20:20.438 Total : 5094.73 19.90 393.88 78.22 12042.96 00:20:20.438 00:20:20.697 13:50:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:22.074 Initializing NVMe Controllers 00:20:22.074 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.074 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:22.074 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:22.074 Initialization complete. Launching workers. 00:20:22.074 ======================================================== 00:20:22.074 Latency(us) 00:20:22.074 Device Information : IOPS MiB/s Average min max 00:20:22.074 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11296.97 44.13 2833.42 586.13 7402.10 00:20:22.074 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2646.99 10.34 12186.05 6862.93 23557.16 00:20:22.074 ======================================================== 00:20:22.074 Total : 13943.96 54.47 4608.83 586.13 23557.16 00:20:22.074 00:20:22.074 13:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:22.074 13:50:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:24.609 Initializing NVMe Controllers 00:20:24.609 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.609 Controller IO queue size 128, less than required. 00:20:24.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.609 Controller IO queue size 128, less than required. 00:20:24.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.609 Initialization complete. Launching workers. 
00:20:24.609 ======================================================== 00:20:24.609 Latency(us) 00:20:24.609 Device Information : IOPS MiB/s Average min max 00:20:24.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1831.97 457.99 70893.83 51927.32 155449.31 00:20:24.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.82 152.21 217693.50 88299.46 359629.33 00:20:24.609 ======================================================== 00:20:24.609 Total : 2440.79 610.20 107511.07 51927.32 359629.33 00:20:24.609 00:20:24.609 13:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:20:24.868 Initializing NVMe Controllers 00:20:24.868 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.868 Controller IO queue size 128, less than required. 00:20:24.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.868 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:24.868 Controller IO queue size 128, less than required. 00:20:24.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:24.868 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:24.868 WARNING: Some requested NVMe devices were skipped 00:20:24.868 No valid NVMe controllers or AIO or URING devices found 00:20:24.868 13:50:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:20:27.406 Initializing NVMe Controllers 00:20:27.406 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.406 Controller IO queue size 128, less than required. 00:20:27.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.406 Controller IO queue size 128, less than required. 00:20:27.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:27.406 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:27.406 Initialization complete. Launching workers. 
00:20:27.406 00:20:27.406 ==================== 00:20:27.406 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:27.406 TCP transport: 00:20:27.406 polls: 8120 00:20:27.406 idle_polls: 5177 00:20:27.406 sock_completions: 2943 00:20:27.406 nvme_completions: 5941 00:20:27.406 submitted_requests: 8902 00:20:27.406 queued_requests: 1 00:20:27.406 00:20:27.406 ==================== 00:20:27.406 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:27.406 TCP transport: 00:20:27.406 polls: 8507 00:20:27.406 idle_polls: 5490 00:20:27.406 sock_completions: 3017 00:20:27.406 nvme_completions: 6277 00:20:27.406 submitted_requests: 9480 00:20:27.406 queued_requests: 1 00:20:27.406 ======================================================== 00:20:27.406 Latency(us) 00:20:27.406 Device Information : IOPS MiB/s Average min max 00:20:27.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1484.51 371.13 88332.75 53621.27 148804.48 00:20:27.406 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1568.49 392.12 82661.86 46432.41 141468.86 00:20:27.406 ======================================================== 00:20:27.406 Total : 3053.00 763.25 85419.32 46432.41 148804.48 00:20:27.406 00:20:27.406 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:27.406 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.666 rmmod nvme_tcp 00:20:27.666 rmmod nvme_fabrics 00:20:27.666 rmmod nvme_keyring 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87904 ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87904 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 87904 ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 87904 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87904 00:20:27.666 killing process with pid 87904 00:20:27.666 13:50:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87904' 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 87904 00:20:27.666 13:50:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 87904 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.604 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:20:28.865 00:20:28.865 real 0m14.620s 00:20:28.865 user 0m51.174s 00:20:28.865 sys 0m4.380s 00:20:28.865 13:50:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:28.865 13:50:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.865 ************************************ 00:20:28.865 END TEST nvmf_perf 00:20:28.865 ************************************ 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.866 ************************************ 00:20:28.866 START TEST nvmf_fio_host 00:20:28.866 ************************************ 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:28.866 * Looking for test storage... 00:20:28.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:28.866 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:29.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.128 --rc genhtml_branch_coverage=1 00:20:29.128 --rc genhtml_function_coverage=1 00:20:29.128 --rc genhtml_legend=1 00:20:29.128 --rc geninfo_all_blocks=1 00:20:29.128 --rc geninfo_unexecuted_blocks=1 00:20:29.128 00:20:29.128 ' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:29.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.128 --rc genhtml_branch_coverage=1 00:20:29.128 --rc genhtml_function_coverage=1 00:20:29.128 --rc genhtml_legend=1 00:20:29.128 --rc geninfo_all_blocks=1 00:20:29.128 --rc geninfo_unexecuted_blocks=1 00:20:29.128 00:20:29.128 ' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:29.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.128 --rc genhtml_branch_coverage=1 00:20:29.128 --rc genhtml_function_coverage=1 00:20:29.128 --rc genhtml_legend=1 00:20:29.128 --rc geninfo_all_blocks=1 00:20:29.128 --rc geninfo_unexecuted_blocks=1 00:20:29.128 00:20:29.128 ' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:29.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.128 --rc genhtml_branch_coverage=1 00:20:29.128 --rc genhtml_function_coverage=1 00:20:29.128 --rc genhtml_legend=1 00:20:29.128 --rc geninfo_all_blocks=1 00:20:29.128 --rc geninfo_unexecuted_blocks=1 00:20:29.128 00:20:29.128 ' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.128 13:50:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.128 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.129 13:50:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.129 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:29.129 Cannot find device "nvmf_init_br" 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:29.129 Cannot find device "nvmf_init_br2" 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:29.129 Cannot find device "nvmf_tgt_br" 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:20:29.129 13:50:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:29.129 Cannot find device "nvmf_tgt_br2" 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:29.129 Cannot find device "nvmf_init_br" 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:29.129 Cannot find device "nvmf_init_br2" 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:20:29.129 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:29.387 Cannot find device "nvmf_tgt_br" 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:29.387 Cannot find device "nvmf_tgt_br2" 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:29.387 Cannot find device "nvmf_br" 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:29.387 Cannot find device "nvmf_init_if" 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:29.387 Cannot find device "nvmf_init_if2" 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.387 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:29.388 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.651 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:29.652 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:29.652 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:20:29.652 00:20:29.652 --- 10.0.0.3 ping statistics --- 00:20:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.652 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:29.652 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:29.652 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:29.652 00:20:29.652 --- 10.0.0.4 ping statistics --- 00:20:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.652 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:20:29.652 00:20:29.652 --- 10.0.0.1 ping statistics --- 00:20:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.652 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:29.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:29.652 00:20:29.652 --- 10.0.0.2 ping statistics --- 00:20:29.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.652 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88440 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88440 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 88440 ']' 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:29.652 13:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.652 [2024-11-06 13:50:10.556717] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:29.652 [2024-11-06 13:50:10.556795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.916 [2024-11-06 13:50:10.708797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.916 [2024-11-06 13:50:10.769885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.916 [2024-11-06 13:50:10.769937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.916 [2024-11-06 13:50:10.769947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.916 [2024-11-06 13:50:10.769955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.916 [2024-11-06 13:50:10.769978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
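The NOTICE block above also documents how to inspect this target while it runs: the app was started with shm id 0 (-i 0) and the full tracepoint mask (-e 0xFFFF), so events can be pulled exactly as the messages suggest (the /tmp destination below is illustrative):

# Live snapshot of nvmf tracepoints from app instance 0, per the NOTICE above.
spdk_trace -s nvmf -i 0
# Or keep the raw shared-memory trace file for offline decoding.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0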
00:20:29.916 [2024-11-06 13:50:10.771373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.916 [2024-11-06 13:50:10.771548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.916 [2024-11-06 13:50:10.771586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.916 [2024-11-06 13:50:10.771592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:30.853 [2024-11-06 13:50:11.636009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.853 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:31.112 Malloc1 00:20:31.112 13:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:31.371 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:31.630 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:31.889 [2024-11-06 13:50:12.603103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.889 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # shift 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:32.149 13:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:20:32.149 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:32.149 fio-3.35 00:20:32.149 Starting 1 thread 00:20:34.684 00:20:34.684 test: (groupid=0, jobs=1): err= 0: pid=88565: Wed Nov 6 13:50:15 2024 00:20:34.684 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.5MiB/2006msec) 00:20:34.684 slat (nsec): min=1504, max=407928, avg=1698.83, stdev=3622.62 00:20:34.684 clat (usec): min=3769, max=18987, avg=6685.33, stdev=1189.74 00:20:34.684 lat (usec): min=3771, max=18989, avg=6687.03, stdev=1189.81 00:20:34.684 clat percentiles (usec): 00:20:34.684 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:20:34.684 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6915], 60.00th=[ 7177], 00:20:34.684 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:20:34.684 | 99.00th=[10159], 99.50th=[12518], 99.90th=[16909], 99.95th=[17171], 00:20:34.684 | 99.99th=[17957] 00:20:34.684 bw ( KiB/s): min=35984, max=46400, per=99.96%, avg=40038.00, stdev=4904.20, samples=4 00:20:34.684 iops : min= 8996, max=11600, avg=10009.50, stdev=1226.05, samples=4 00:20:34.684 write: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(78.6MiB/2006msec); 0 zone resets 00:20:34.684 slat (nsec): min=1539, max=358811, avg=1737.89, stdev=2665.44 00:20:34.684 clat (usec): min=2925, max=17900, avg=6038.84, stdev=1083.29 00:20:34.684 lat (usec): min=2940, max=17902, avg=6040.58, stdev=1083.27 00:20:34.684 clat percentiles (usec): 00:20:34.684 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 
00:20:34.684 | 30.00th=[ 5211], 40.00th=[ 5473], 50.00th=[ 6194], 60.00th=[ 6456], 00:20:34.684 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:20:34.684 | 99.00th=[ 8225], 99.50th=[11207], 99.90th=[15926], 99.95th=[16581], 00:20:34.684 | 99.99th=[17957] 00:20:34.684 bw ( KiB/s): min=36416, max=46904, per=100.00%, avg=40110.00, stdev=4906.79, samples=4 00:20:34.684 iops : min= 9104, max=11726, avg=10027.50, stdev=1226.70, samples=4 00:20:34.684 lat (msec) : 4=0.26%, 10=98.87%, 20=0.87% 00:20:34.684 cpu : usr=68.18%, sys=25.74%, ctx=9, majf=0, minf=7 00:20:34.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:34.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.684 issued rwts: total=20088,20113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.684 00:20:34.684 Run status group 0 (all jobs): 00:20:34.684 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.5MiB (82.3MB), run=2006-2006msec 00:20:34.684 WRITE: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=78.6MiB (82.4MB), run=2006-2006msec 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:34.684 13:50:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:34.684 13:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:20:34.684 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:34.684 fio-3.35 00:20:34.684 Starting 1 thread 00:20:37.221 00:20:37.221 test: (groupid=0, jobs=1): err= 0: pid=88608: Wed Nov 6 13:50:17 2024 00:20:37.221 read: IOPS=10.7k, BW=167MiB/s (176MB/s)(336MiB/2005msec) 00:20:37.221 slat (nsec): min=2401, max=89504, avg=2699.09, stdev=1460.91 00:20:37.221 clat (usec): min=1762, max=14632, avg=6982.83, stdev=1708.40 00:20:37.221 lat (usec): min=1764, max=14639, avg=6985.53, stdev=1708.59 00:20:37.221 clat percentiles (usec): 00:20:37.221 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:20:37.221 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7439], 00:20:37.221 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9765], 00:20:37.221 | 99.00th=[11469], 99.50th=[12911], 99.90th=[13829], 99.95th=[14091], 00:20:37.221 | 99.99th=[14484] 00:20:37.221 bw ( KiB/s): min=80704, max=97760, per=50.62%, avg=86768.00, stdev=7522.50, samples=4 00:20:37.221 iops : min= 5044, max= 6110, avg=5423.00, stdev=470.16, samples=4 00:20:37.221 write: IOPS=6465, BW=101MiB/s (106MB/s)(178MiB/1758msec); 0 zone resets 00:20:37.221 slat (usec): min=27, max=347, avg=29.83, stdev= 7.26 00:20:37.221 clat (usec): min=3381, max=15449, avg=8671.25, stdev=1580.70 00:20:37.221 lat (usec): min=3409, max=15566, avg=8701.07, stdev=1582.50 00:20:37.221 clat percentiles (usec): 00:20:37.221 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7439], 00:20:37.221 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:20:37.221 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[11731], 00:20:37.221 | 99.00th=[13173], 99.50th=[14091], 99.90th=[14877], 99.95th=[15008], 00:20:37.221 | 99.99th=[15401] 00:20:37.221 bw ( KiB/s): min=83904, max=101920, per=87.39%, avg=90400.00, stdev=7907.61, samples=4 00:20:37.221 iops : min= 5244, max= 6370, avg=5650.00, stdev=494.23, samples=4 00:20:37.221 lat (msec) : 2=0.03%, 4=1.62%, 10=89.45%, 20=8.90% 00:20:37.221 cpu : usr=73.90%, sys=18.16%, ctx=3, majf=0, minf=14 00:20:37.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:37.221 issued rwts: total=21480,11366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:37.221 00:20:37.221 Run status group 0 (all jobs): 00:20:37.221 READ: bw=167MiB/s (176MB/s), 167MiB/s-167MiB/s (176MB/s-176MB/s), io=336MiB (352MB), run=2005-2005msec 
00:20:37.221 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=178MiB (186MB), run=1758-1758msec 00:20:37.221 13:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.221 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.481 rmmod nvme_tcp 00:20:37.481 rmmod nvme_fabrics 00:20:37.481 rmmod nvme_keyring 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88440 ']' 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88440 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 88440 ']' 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 88440 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88440 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:37.481 killing process with pid 88440 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88440' 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 88440 00:20:37.481 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 88440 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.741 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.008 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.009 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.009 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.009 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:20:38.009 00:20:38.009 real 0m9.216s 00:20:38.009 user 0m35.009s 00:20:38.009 sys 0m2.864s 00:20:38.009 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.009 13:50:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.009 ************************************ 00:20:38.009 END TEST nvmf_fio_host 00:20:38.009 ************************************ 00:20:38.311 13:50:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:38.311 13:50:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:38.311 13:50:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.311 13:50:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.311 ************************************ 00:20:38.311 START TEST nvmf_failover 00:20:38.311 ************************************ 00:20:38.311 13:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:38.311 * Looking for test storage... 00:20:38.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.311 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:38.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.312 --rc genhtml_branch_coverage=1 00:20:38.312 --rc genhtml_function_coverage=1 00:20:38.312 --rc genhtml_legend=1 00:20:38.312 --rc geninfo_all_blocks=1 00:20:38.312 --rc geninfo_unexecuted_blocks=1 00:20:38.312 00:20:38.312 ' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:38.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.312 --rc genhtml_branch_coverage=1 00:20:38.312 --rc genhtml_function_coverage=1 00:20:38.312 --rc genhtml_legend=1 00:20:38.312 --rc geninfo_all_blocks=1 00:20:38.312 --rc geninfo_unexecuted_blocks=1 00:20:38.312 00:20:38.312 ' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:38.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.312 --rc genhtml_branch_coverage=1 00:20:38.312 --rc genhtml_function_coverage=1 00:20:38.312 --rc genhtml_legend=1 00:20:38.312 --rc geninfo_all_blocks=1 00:20:38.312 --rc geninfo_unexecuted_blocks=1 00:20:38.312 00:20:38.312 ' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:38.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.312 --rc genhtml_branch_coverage=1 00:20:38.312 --rc genhtml_function_coverage=1 00:20:38.312 --rc genhtml_legend=1 00:20:38.312 --rc geninfo_all_blocks=1 00:20:38.312 --rc geninfo_unexecuted_blocks=1 00:20:38.312 00:20:38.312 ' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.312 
13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
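The trace above is test/nvmf/common.sh establishing the harness environment before anything is started: three listener ports, a virtual (veth) network type, and a per-run host identity. A minimal sketch of the equivalent shell setup, using only values visible in the trace; the NQN shown is the one this particular run generated, and the HOSTID derivation is an illustrative reconstruction rather than a copy of the script:

    # Listener ports for the primary path and the two failover paths
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NET_TYPE=virt                      # veth topology instead of physical NICs
    # Fresh host identity per run; the HOSTID is the UUID part of the NQN
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: strip everything up to the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Since NET_TYPE is virt and the transport is tcp, nvmftestinit proceeds to nvmf_veth_init, which first tears down any leftover nvmf_* links (the "Cannot find device" lines below are the expected no-op case on a clean host) and then builds the namespace topology.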
00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.312 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.572 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.573 Cannot find device "nvmf_init_br" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.573 Cannot find device "nvmf_init_br2" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:20:38.573 Cannot find device "nvmf_tgt_br" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.573 Cannot find device "nvmf_tgt_br2" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.573 Cannot find device "nvmf_init_br" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.573 Cannot find device "nvmf_init_br2" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.573 Cannot find device "nvmf_tgt_br" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.573 Cannot find device "nvmf_tgt_br2" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.573 Cannot find device "nvmf_br" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:38.573 Cannot find device "nvmf_init_if" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.573 Cannot find device "nvmf_init_if2" 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:38.573 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.833 
13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:38.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:20:38.833 00:20:38.833 --- 10.0.0.3 ping statistics --- 00:20:38.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.833 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:38.833 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:38.833 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:20:38.833 00:20:38.833 --- 10.0.0.4 ping statistics --- 00:20:38.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.833 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:38.833 00:20:38.833 --- 10.0.0.1 ping statistics --- 00:20:38.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.833 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:38.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:38.833 00:20:38.833 --- 10.0.0.2 ping statistics --- 00:20:38.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.833 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.833 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88895 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88895 00:20:39.093 13:50:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 88895 ']' 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.093 13:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.093 [2024-11-06 13:50:19.837806] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:20:39.093 [2024-11-06 13:50:19.837898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.093 [2024-11-06 13:50:19.988573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:39.352 [2024-11-06 13:50:20.058482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.352 [2024-11-06 13:50:20.058543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.352 [2024-11-06 13:50:20.058554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.352 [2024-11-06 13:50:20.058562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.352 [2024-11-06 13:50:20.058585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
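The ip/iptables sequence above reduces to a two-sided veth topology: the initiator interfaces stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target interfaces live inside nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and the four root-namespace peer ends hang off a single bridge. Condensed from the commands in the trace, assuming a clean host with no leftover nvmf_* devices:

    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target ends move into the namespace; each side gets its /24 addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up, then tie the root-namespace peers to one bridge
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Accept NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace verify both directions (root namespace to 10.0.0.3/.4, and from inside the namespace back to 10.0.0.1/.2) before nvmf_tgt is launched with core mask 0xE inside the namespace.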
00:20:39.352 [2024-11-06 13:50:20.059938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.352 [2024-11-06 13:50:20.060036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.352 [2024-11-06 13:50:20.060038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.921 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:40.180 [2024-11-06 13:50:20.963272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.180 13:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:40.439 Malloc0 00:20:40.439 13:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.698 13:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.958 13:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.958 [2024-11-06 13:50:21.841187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.958 13:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:41.216 [2024-11-06 13:50:22.049024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:41.216 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:41.475 [2024-11-06 13:50:22.248877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89001 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89001 /var/tmp/bdevperf.sock 
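Taken together with the controller attach traced just below, the failover fixture is: one subsystem (cnode1) backed by a 64 MiB malloc bdev, listening on the same address at three ports, exercised by a standalone bdevperf app on its own RPC socket. A condensed sketch of the sequence, with the long rpc.py/bdevperf paths shortened for readability and the three separate listener RPCs from the trace summarized as a loop:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done
    # bdevperf in RPC-wait mode (-z) with a 128-deep 4 KiB verify workload for 15 s
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    # Attach the same subsystem through two ports under one bdev name (NVMe0n1);
    # -x failover selects the failover multipath policy, so the second path is standby
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

Once perform_tests starts the verify job, removing the 4420 listener (the RPC issued right before the burst of tcp.c:1773 messages below) is what forces I/O over to the 4421 path.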
00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 89001 ']' 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:41.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:41.475 13:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:42.410 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:42.410 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:42.410 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:42.668 NVMe0n1 00:20:42.669 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:42.926 00:20:42.926 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89049 00:20:42.926 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.926 13:50:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:43.861 13:50:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:44.120 [2024-11-06 13:50:24.982576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 [2024-11-06 13:50:24.982686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.120 
00:20:44.121 [... tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* "The recv state of tqpair=0x1628830 is same with the state(6) to be set" repeated for dozens of consecutive timestamps (13:50:24.982694 through 13:50:24.983420 and beyond) during the listener-removal step; repetitions elided ...]
*ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 
13:50:24.983596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.121 [2024-11-06 13:50:24.983644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1628830 is same with the state(6) to be set 00:20:44.122 13:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:47.415 13:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:47.415 00:20:47.415 13:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:47.674 [2024-11-06 13:50:28.504074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set 00:20:47.674 [2024-11-06 13:50:28.504211] 
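The attach at failover.sh@47 above is the heart of the test setup: it adds a second TCP path to the existing NVMe0 controller, and -x failover selects the failover multipath policy, so the new path stays passive until the active one disappears. As a standalone sketch, assuming a bdevperf instance is already serving RPCs on /var/tmp/bdevperf.sock and the target exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.3, as in this run:

  # Attach an extra path to the existing -b NVMe0 controller (sketch; all
  # flags taken verbatim from the failover.sh@47 invocation above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover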
00:20:47.674 [2024-11-06 13:50:28.504074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16295e0 is same with the state(6) to be set
(tcp.c:1773 message repeated for tqpair=0x16295e0 through 13:50:28.504452)
00:20:47.675 13:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:20:50.966 13:50:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:50.966 [2024-11-06 13:50:31.734175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:50.966 13:50:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:20:51.903 13:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
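Steps @50 through @57 above are the actual failover trigger: re-add the original listener on port 4420, give it a moment to come up, then remove the port-4422 listener the initiator is currently connected through. Condensed into a sketch with the same subsystem, address, and ports as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # bring the old path back
  sleep 1                                                                                  # let the listener come up
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 # drop the active path

The tcp.c:1773 flood that follows each removal (below, for tqpair=0x17735b0) comes from the dropped connection's qpair being repeatedly re-set to the same receive state during teardown; the run still completes normally below.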
00:20:52.162 [2024-11-06 13:50:32.966889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17735b0 is same with the state(6) to be set
(tcp.c:1773 message repeated for tqpair=0x17735b0 through 13:50:32.967607)
00:20:52.163 13:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89049
00:20:58.738 {
00:20:58.738   "results": [
00:20:58.738     {
00:20:58.738       "job": "NVMe0n1",
00:20:58.738       "core_mask": "0x1",
00:20:58.738       "workload": "verify",
00:20:58.738       "status": "finished",
00:20:58.738       "verify_range": {
00:20:58.738         "start": 0,
00:20:58.738         "length": 16384
00:20:58.738       },
00:20:58.738       "queue_depth": 128,
00:20:58.738       "io_size": 4096,
00:20:58.738       "runtime": 15.008173,
00:20:58.738       "iops": 11592.217120631538,
00:20:58.738       "mibps": 45.282098127466945,
00:20:58.738       "io_failed": 3773,
00:20:58.738       "io_timeout": 0,
00:20:58.738       "avg_latency_us": 10786.07137508521,
00:20:58.738       "min_latency_us": 440.8546184738956,
00:20:58.738       "max_latency_us": 22740.202409638554
00:20:58.738     }
00:20:58.738   ],
00:20:58.738   "core_count": 1
00:20:58.738 }
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89001
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 89001 ']'
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 89001
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89001
00:20:58.738 killing process with pid 89001
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89001'
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 89001
00:20:58.738 13:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 89001
00:20:58.738 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-11-06 13:50:22.307592] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
[2024-11-06 13:50:22.307689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89001 ]
[2024-11-06 13:50:22.459973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-06 13:50:22.521924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
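The bdevperf summary above is internally consistent: with "io_size": 4096, throughput in MiB/s is just IOPS scaled by the I/O size. A quick check with this run's numbers:

  awk 'BEGIN { printf "%.6f\n", 11592.217120631538 * 4096 / 2^20 }'   # prints 45.282098

which matches the reported "mibps"; the 3773 "io_failed" are presumably the commands aborted during the path switches (the ABORTED - SQ DELETION completions dumped from try.txt below).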
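The killprocess trace at autotest_common.sh@952-@976 above boils down to a guard-kill-reap helper. A condensed re-sketch of the traced steps (the reactor_0/sudo process-name check at @958-@962 is elided):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                # @952: refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0   # @956: nothing to do if already gone
      echo "killing process with pid $pid"     # @970
      kill "$pid"                              # @971: default SIGTERM
      wait "$pid" || true                      # @976: reap the child (exit code is 143 after SIGTERM)
  }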
00:20:58.739 11919.00 IOPS, 46.56 MiB/s [2024-11-06T13:50:39.672Z]
00:20:58.739 [2024-11-06 13:50:24.984139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.739 [2024-11-06 13:50:24.984187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(command/completion pairs repeated for READs lba:106656 through lba:107144 and WRITEs lba:107168 through lba:107376, each completed ABORTED - SQ DELETION (00/08))
00:20:58.741 [2024-11-06 13:50:24.986728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:58.741 [2024-11-06 13:50:24.986741]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.986986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.986999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.741 [2024-11-06 13:50:24.987502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.741 [2024-11-06 13:50:24.987516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.742 [2024-11-06 13:50:24.987529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.742 [2024-11-06 13:50:24.987543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.742 [2024-11-06 13:50:24.987556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.742 [2024-11-06 13:50:24.987570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.742 [2024-11-06 13:50:24.987583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
00:20:58.742 [2024-11-06 13:50:24.987738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.742 [2024-11-06 13:50:24.987750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.987765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1360 is same with the state(6) to be set
00:20:58.742 [2024-11-06 13:50:24.987797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:58.742 [2024-11-06 13:50:24.987807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:58.742 [2024-11-06 13:50:24.987817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107160 len:8 PRP1 0x0 PRP2 0x0
00:20:58.742 [2024-11-06 13:50:24.987830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.987904] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:20:58.742 [2024-11-06 13:50:24.987958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:58.742 [2024-11-06 13:50:24.987974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.987988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:58.742 [2024-11-06 13:50:24.988001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.988015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:58.742 [2024-11-06 13:50:24.988028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.988041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:58.742 [2024-11-06 13:50:24.988054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.742 [2024-11-06 13:50:24.988067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:58.742 [2024-11-06 13:50:24.990802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:58.742 [2024-11-06 13:50:24.990842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074f30 (9): Bad file descriptor
00:20:58.742 [2024-11-06 13:50:25.012766] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
11732.00 IOPS, 45.83 MiB/s [2024-11-06T13:50:39.675Z]
11821.33 IOPS, 46.18 MiB/s [2024-11-06T13:50:39.675Z]
11860.50 IOPS, 46.33 MiB/s [2024-11-06T13:50:39.675Z]
[... 2024-11-06 13:50:28.504645 - 13:50:28.507701: repeated nvme_qpair.c nvme_io_qpair_print_command / spdk_nvme_print_completion pairs — READ sqid:1 lba:56824-56896 and WRITE sqid:1 lba:56904-57672, len:8 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:58.745 [2024-11-06 13:50:28.507730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:58.745 [2024-11-06 13:50:28.507746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57680 len:8 PRP1 0x0 PRP2 0x0
00:20:58.745 [2024-11-06 13:50:28.507759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-06 13:50:28.507776 - 13:50:28.508263: repeated nvme_qpair.c nvme_qpair_abort_queued_reqs ("aborting queued i/o") / nvme_qpair_manual_complete_request ("Command completed manually:") groups — WRITE sqid:1 cid:0 lba:57688-57768, len:8, PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:58.745 [2024-11-06 13:50:28.508275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57776 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57784 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57792 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57800 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57808 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 13:50:28.508512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.745 [2024-11-06 13:50:28.508522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57816 len:8 PRP1 0x0 PRP2 0x0 00:20:58.745 [2024-11-06 13:50:28.508534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.745 [2024-11-06 13:50:28.508547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.745 [2024-11-06 
13:50:28.508556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.746 [2024-11-06 13:50:28.508565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57824 len:8 PRP1 0x0 PRP2 0x0 00:20:58.746 [2024-11-06 13:50:28.508577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.746 [2024-11-06 13:50:28.508599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.746 [2024-11-06 13:50:28.508608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57832 len:8 PRP1 0x0 PRP2 0x0 00:20:58.746 [2024-11-06 13:50:28.508620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.746 [2024-11-06 13:50:28.508647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.746 [2024-11-06 13:50:28.508657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57840 len:8 PRP1 0x0 PRP2 0x0 00:20:58.746 [2024-11-06 13:50:28.508669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508738] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:20:58.746 [2024-11-06 13:50:28.508784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.746 [2024-11-06 13:50:28.508799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.746 [2024-11-06 13:50:28.508825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.746 [2024-11-06 13:50:28.508851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.746 [2024-11-06 13:50:28.508886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:28.508899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:20:58.746 [2024-11-06 13:50:28.508956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074f30 (9): Bad file descriptor 00:20:58.746 [2024-11-06 13:50:28.511657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:58.746 [2024-11-06 13:50:28.533736] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:20:58.746 11756.80 IOPS, 45.92 MiB/s [2024-11-06T13:50:39.679Z] 11777.83 IOPS, 46.01 MiB/s [2024-11-06T13:50:39.679Z] 11815.57 IOPS, 46.15 MiB/s [2024-11-06T13:50:39.679Z] 11830.00 IOPS, 46.21 MiB/s [2024-11-06T13:50:39.679Z] 11845.22 IOPS, 46.27 MiB/s [2024-11-06T13:50:39.679Z] [2024-11-06 13:50:32.969005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.746 [2024-11-06 13:50:32.969350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.746 [2024-11-06 13:50:32.969917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.746 [2024-11-06 13:50:32.969931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.969944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.969958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.969972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.969992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.970005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.970032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.970065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.970093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.747 [2024-11-06 13:50:32.970120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 
13:50:32.970787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.747 [2024-11-06 13:50:32.970923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.747 [2024-11-06 13:50:32.970936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.970950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.970964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.970979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.970992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.971983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.971998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.972011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.972025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.972038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.748 [2024-11-06 13:50:32.972052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.748 [2024-11-06 13:50:32.972065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.749 [2024-11-06 13:50:32.972222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.749 [2024-11-06 13:50:32.972234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.749 [... twelve further queued WRITEs (lba 98448-98536, len:8, SGL DATA BLOCK) aborted with identical ABORTED - SQ DELETION (00/08) completions, and six queued WRITEs (lba 98544-98584, PRP1 0x0 PRP2 0x0) completed manually by nvme_qpair_abort_queued_reqs with the same status; duplicate notice pairs elided ...] 
00:20:58.749 [2024-11-06 13:50:32.972930] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 
00:20:58.749 [... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08); elided ...] 
00:20:58.749 [2024-11-06 13:50:32.973092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:20:58.749 [2024-11-06 13:50:32.973133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1074f30 (9): Bad file descriptor 
00:20:58.749 [2024-11-06 13:50:32.975835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 
00:20:58.749 [2024-11-06 13:50:33.014725] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
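The abort flood above is the expected byproduct of a path failover: bdev_nvme deletes the submission queue on the failing path, so every queued WRITE completes as ABORTED - SQ DELETION before the controller reconnects on the next trid. For context, a minimal sketch of how the alternate paths get registered, using the rpc.py attach calls visible later in this run (the -x failover flag makes repeat attaches for the same -b name add failover trids instead of creating new controllers):

    # first attach creates controller NVMe0; the next two add failover paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover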
00:20:58.749 11769.70 IOPS, 45.98 MiB/s [2024-11-06T13:50:39.682Z] 11771.73 IOPS, 45.98 MiB/s [2024-11-06T13:50:39.682Z] 11730.75 IOPS, 45.82 MiB/s [2024-11-06T13:50:39.682Z] 11564.62 IOPS, 45.17 MiB/s [2024-11-06T13:50:39.682Z] 11581.07 IOPS, 45.24 MiB/s [2024-11-06T13:50:39.682Z] 11591.40 IOPS, 45.28 MiB/s 00:20:58.749 Latency(us) 00:20:58.749 [2024-11-06T13:50:39.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.749 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:58.749 Verification LBA range: start 0x0 length 0x4000 00:20:58.749 NVMe0n1 : 15.01 11592.22 45.28 251.40 0.00 10786.07 440.85 22740.20 00:20:58.749 [2024-11-06T13:50:39.682Z] =================================================================================================================== 00:20:58.749 [2024-11-06T13:50:39.682Z] Total : 11592.22 45.28 251.40 0.00 10786.07 440.85 22740.20 00:20:58.749 Received shutdown signal, test time was about 15.000000 seconds 00:20:58.749 00:20:58.749 Latency(us) 00:20:58.749 [2024-11-06T13:50:39.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.749 [2024-11-06T13:50:39.682Z] =================================================================================================================== 00:20:58.749 [2024-11-06T13:50:39.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89258 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89258 /var/tmp/bdevperf.sock 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 89258 ']' 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
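The grep -c 'Resetting controller successful' a few lines up is the pass/fail gate for the phase that just finished: the captured log must contain exactly three successful resets, one per forced path failure. A condensed, hedged reconstruction of that host/failover.sh@65-67 check, assuming the try.txt path that is catted further down in this log:

    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }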
00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:58.750 13:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:59.317 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:59.317 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:20:59.317 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:59.576 [2024-11-06 13:50:40.334252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:59.576 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:20:59.835 [2024-11-06 13:50:40.550294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:20:59.835 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:00.095 NVMe0n1 00:21:00.095 13:50:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:00.354 00:21:00.354 13:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:00.613 00:21:00.613 13:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:00.613 13:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.872 13:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.131 13:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:04.420 13:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:04.420 13:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:04.420 13:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89391 00:21:04.420 13:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.420 13:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89391 00:21:05.358 { 00:21:05.358 "results": [ 00:21:05.358 { 00:21:05.358 "job": "NVMe0n1", 00:21:05.358 "core_mask": "0x1", 00:21:05.358 "workload": "verify", 00:21:05.358 "status": "finished", 00:21:05.358 "verify_range": { 00:21:05.358 "start": 0, 00:21:05.358 "length": 16384 00:21:05.358 }, 00:21:05.359 "queue_depth": 128, 
00:21:05.359 "io_size": 4096, 00:21:05.359 "runtime": 1.00931, 00:21:05.359 "iops": 11737.721809949371, 00:21:05.359 "mibps": 45.85047582011473, 00:21:05.359 "io_failed": 0, 00:21:05.359 "io_timeout": 0, 00:21:05.359 "avg_latency_us": 10861.415706753747, 00:21:05.359 "min_latency_us": 1500.2216867469879, 00:21:05.359 "max_latency_us": 12107.052208835341 00:21:05.359 } 00:21:05.359 ], 00:21:05.359 "core_count": 1 00:21:05.359 } 00:21:05.359 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:05.359 [2024-11-06 13:50:39.258711] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:21:05.359 [2024-11-06 13:50:39.258798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89258 ] 00:21:05.359 [2024-11-06 13:50:39.415636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.359 [2024-11-06 13:50:39.474110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.359 [2024-11-06 13:50:41.810127] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:21:05.359 [2024-11-06 13:50:41.810235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.359 [2024-11-06 13:50:41.810257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.359 [2024-11-06 13:50:41.810278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.359 [2024-11-06 13:50:41.810291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.359 [2024-11-06 13:50:41.810305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.359 [2024-11-06 13:50:41.810319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.359 [2024-11-06 13:50:41.810332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.359 [2024-11-06 13:50:41.810345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.359 [2024-11-06 13:50:41.810359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:21:05.359 [2024-11-06 13:50:41.810408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:21:05.359 [2024-11-06 13:50:41.810433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3af30 (9): Bad file descriptor 00:21:05.359 [2024-11-06 13:50:41.817346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:21:05.359 Running I/O for 1 seconds... 
00:21:05.359 11716.00 IOPS, 45.77 MiB/s 00:21:05.359 Latency(us) 00:21:05.359 [2024-11-06T13:50:46.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.359 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:05.359 Verification LBA range: start 0x0 length 0x4000 00:21:05.359 NVMe0n1 : 1.01 11737.72 45.85 0.00 0.00 10861.42 1500.22 12107.05 00:21:05.359 [2024-11-06T13:50:46.292Z] =================================================================================================================== 00:21:05.359 [2024-11-06T13:50:46.292Z] Total : 11737.72 45.85 0.00 0.00 10861.42 1500.22 12107.05 00:21:05.359 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:05.359 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.618 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.877 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.877 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:06.137 13:50:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:06.137 13:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89258 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 89258 ']' 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 89258 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89258 00:21:09.428 killing process with pid 89258 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89258' 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 89258 00:21:09.428 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 89258 00:21:09.688 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.947 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.206 rmmod nvme_tcp 00:21:10.206 rmmod nvme_fabrics 00:21:10.206 rmmod nvme_keyring 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88895 ']' 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88895 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 88895 ']' 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 88895 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88895 00:21:10.206 killing process with pid 88895 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88895' 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 88895 00:21:10.206 13:50:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 88895 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:10.466 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:21:10.726 00:21:10.726 real 0m32.561s 00:21:10.726 user 2m2.769s 00:21:10.726 sys 0m6.002s 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.726 ************************************ 00:21:10.726 END TEST nvmf_failover 00:21:10.726 ************************************ 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.726 ************************************ 00:21:10.726 START TEST nvmf_host_discovery 00:21:10.726 ************************************ 00:21:10.726 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:10.986 * Looking for test storage... 
00:21:10.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:10.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.986 --rc genhtml_branch_coverage=1 00:21:10.986 --rc genhtml_function_coverage=1 00:21:10.986 --rc genhtml_legend=1 00:21:10.986 --rc geninfo_all_blocks=1 00:21:10.986 --rc geninfo_unexecuted_blocks=1 00:21:10.986 00:21:10.986 ' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:10.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.986 --rc genhtml_branch_coverage=1 00:21:10.986 --rc genhtml_function_coverage=1 00:21:10.986 --rc genhtml_legend=1 00:21:10.986 --rc geninfo_all_blocks=1 00:21:10.986 --rc geninfo_unexecuted_blocks=1 00:21:10.986 00:21:10.986 ' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:10.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.986 --rc genhtml_branch_coverage=1 00:21:10.986 --rc genhtml_function_coverage=1 00:21:10.986 --rc genhtml_legend=1 00:21:10.986 --rc geninfo_all_blocks=1 00:21:10.986 --rc geninfo_unexecuted_blocks=1 00:21:10.986 00:21:10.986 ' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:10.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.986 --rc genhtml_branch_coverage=1 00:21:10.986 --rc genhtml_function_coverage=1 00:21:10.986 --rc genhtml_legend=1 00:21:10.986 --rc geninfo_all_blocks=1 00:21:10.986 --rc geninfo_unexecuted_blocks=1 00:21:10.986 00:21:10.986 ' 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.986 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
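From here nvmftestinit tears down any stale interfaces and rebuilds the virtual topology: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything bridged over nvmf_br, as echoed below. A condensed sketch of the setup commands (names and addresses taken from this run; the second interface pair, *_if2, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br up && ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br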
00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:10.987 Cannot find device "nvmf_init_br" 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:21:10.987 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:11.246 Cannot find device "nvmf_init_br2" 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:11.246 Cannot find device "nvmf_tgt_br" 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:11.246 Cannot find device "nvmf_tgt_br2" 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:11.246 Cannot find device "nvmf_init_br" 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:21:11.246 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:11.247 Cannot find device "nvmf_init_br2" 00:21:11.247 13:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:11.247 Cannot find device "nvmf_tgt_br" 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:11.247 Cannot find device "nvmf_tgt_br2" 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:11.247 Cannot find device "nvmf_br" 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:11.247 Cannot find device "nvmf_init_if" 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:11.247 Cannot find device "nvmf_init_if2" 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:11.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:11.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:21:11.507 00:21:11.507 --- 10.0.0.3 ping statistics --- 00:21:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.507 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:11.507 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:11.507 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:21:11.507 00:21:11.507 --- 10.0.0.4 ping statistics --- 00:21:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.507 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:21:11.507 00:21:11.507 --- 10.0.0.1 ping statistics --- 00:21:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.507 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:11.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:11.507 00:21:11.507 --- 10.0.0.2 ping statistics --- 00:21:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.507 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89751 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89751 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 89751 ']' 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.507 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.767 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.767 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.767 13:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.767 [2024-11-06 13:50:52.492847] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:21:11.767 [2024-11-06 13:50:52.492932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.767 [2024-11-06 13:50:52.646495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.026 [2024-11-06 13:50:52.708359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.026 [2024-11-06 13:50:52.708564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.026 [2024-11-06 13:50:52.708583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.026 [2024-11-06 13:50:52.708592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.026 [2024-11-06 13:50:52.708599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.026 [2024-11-06 13:50:52.708995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 [2024-11-06 13:50:53.435447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 [2024-11-06 13:50:53.443593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 null0 00:21:12.595 13:50:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 null1 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89801 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89801 /tmp/host.sock 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 89801 ']' 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:12.595 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.595 13:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.855 [2024-11-06 13:50:53.539348] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:21:12.855 [2024-11-06 13:50:53.539414] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89801 ] 00:21:12.855 [2024-11-06 13:50:53.693950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.855 [2024-11-06 13:50:53.758434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:13.793 13:50:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.053 [2024-11-06 13:50:54.745719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.053 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:21:14.054 13:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:21:14.623 [2024-11-06 13:50:55.447345] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:14.623 [2024-11-06 13:50:55.447380] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:14.623 [2024-11-06 13:50:55.447401] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:14.623 [2024-11-06 13:50:55.534819] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:21:14.902 [2024-11-06 13:50:55.597102] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:21:14.902 [2024-11-06 13:50:55.597897] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1268ba0:1 started. 00:21:14.902 [2024-11-06 13:50:55.599764] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:14.902 [2024-11-06 13:50:55.599789] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:14.902 [2024-11-06 13:50:55.605958] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1268ba0 was disconnected and freed. delete nvme_qpair. 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:15.161 13:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:15.161 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 13:50:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.162 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 [2024-11-06 13:50:56.177424] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11e3260:1 started. 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:15.421 [2024-11-06 13:50:56.185052] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11e3260 was disconnected and freed. delete nvme_qpair. 
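By this point the trace has started discovery on the host, built up the subsystem it discovers, and polled after each step with small RPC helpers. The sketch below is reconstructed from the rpc_cmd invocations and jq pipelines visible above (rpc_cmd wraps scripts/rpc.py in SPDK's test harness; host/discovery.sh itself is the authority for the helper bodies):

    # Discovery on the host side, then the target-side subsystem it discovers,
    # in the order the trace executes them. -q sets the host NQN, which the
    # nvmf_subsystem_add_host call must allow before attach succeeds.
    "$rpc" -s /tmp/host.sock log_set_flag bdev_nvme
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

    # Polling helpers, as they appear in the trace: normalize RPC output to one
    # sorted, space-separated line so each condition reduces to string equality.
    get_subsystem_names() {
        "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # ports of every connected path, e.g. "4420 4421"
        "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

Discovery attaches nvme0 only once add_host permits the host NQN, which is why the '' == nvme0 comparison above fails once, sleeps, and passes on the retry after the discovery poller logs "attach nvme0 done".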
00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.421 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 [2024-11-06 13:50:56.296525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:15.422 [2024-11-06 13:50:56.296932] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:15.422 [2024-11-06 13:50:56.296958] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.422 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.681 13:50:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:15.681 [2024-11-06 13:50:56.384269] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.681 [2024-11-06 13:50:56.449498] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:21:15.681 [2024-11-06 13:50:56.449549] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:15.681 [2024-11-06 13:50:56.449559] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:15.681 [2024-11-06 13:50:56.449564] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:15.681 13:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.618 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:16.619 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.880 [2024-11-06 13:50:57.571772] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:16.880 [2024-11-06 13:50:57.571801] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:16.880 [2024-11-06 13:50:57.574078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.880 [2024-11-06 13:50:57.574112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.880 [2024-11-06 13:50:57.574124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.880 [2024-11-06 13:50:57.574133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.880 [2024-11-06 13:50:57.574143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.880 [2024-11-06 13:50:57.574152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.880 [2024-11-06 13:50:57.574161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.880 [2024-11-06 13:50:57.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.880 [2024-11-06 13:50:57.574181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:16.880 [2024-11-06 13:50:57.584028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:16.880 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.880 [2024-11-06 13:50:57.594026] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.880 [2024-11-06 13:50:57.594044] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.880 [2024-11-06 13:50:57.594050] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.880 [2024-11-06 13:50:57.594056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.880 [2024-11-06 13:50:57.594084] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.880 [2024-11-06 13:50:57.594152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.880 [2024-11-06 13:50:57.594168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.880 [2024-11-06 13:50:57.594179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.880 [2024-11-06 13:50:57.594192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.880 [2024-11-06 13:50:57.594206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.880 [2024-11-06 13:50:57.594214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.880 [2024-11-06 13:50:57.594226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.880 [2024-11-06 13:50:57.594234] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:16.880 [2024-11-06 13:50:57.594240] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.880 [2024-11-06 13:50:57.594246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:16.880 [2024-11-06 13:50:57.604076] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.880 [2024-11-06 13:50:57.604196] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.880 [2024-11-06 13:50:57.604207] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.880 [2024-11-06 13:50:57.604213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.880 [2024-11-06 13:50:57.604243] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.880 [2024-11-06 13:50:57.604298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.880 [2024-11-06 13:50:57.604313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.880 [2024-11-06 13:50:57.604323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.880 [2024-11-06 13:50:57.604337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.880 [2024-11-06 13:50:57.604350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.880 [2024-11-06 13:50:57.604359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.880 [2024-11-06 13:50:57.604369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.880 [2024-11-06 13:50:57.604377] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:16.880 [2024-11-06 13:50:57.604383] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.881 [2024-11-06 13:50:57.604388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.881 [2024-11-06 13:50:57.614236] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.881 [2024-11-06 13:50:57.614252] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.881 [2024-11-06 13:50:57.614258] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.614263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.881 [2024-11-06 13:50:57.614284] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:16.881 [2024-11-06 13:50:57.614325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.881 [2024-11-06 13:50:57.614339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.881 [2024-11-06 13:50:57.614348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.881 [2024-11-06 13:50:57.614361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.881 [2024-11-06 13:50:57.614373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.881 [2024-11-06 13:50:57.614381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.881 [2024-11-06 13:50:57.614390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.881 [2024-11-06 13:50:57.614397] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:16.881 [2024-11-06 13:50:57.614403] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.881 [2024-11-06 13:50:57.614408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:16.881 [2024-11-06 13:50:57.624277] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.881 [2024-11-06 13:50:57.624293] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.881 [2024-11-06 13:50:57.624299] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.624304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.881 [2024-11-06 13:50:57.624326] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.624365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.881 [2024-11-06 13:50:57.624378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.881 [2024-11-06 13:50:57.624388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.881 [2024-11-06 13:50:57.624400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.881 [2024-11-06 13:50:57.624413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.881 [2024-11-06 13:50:57.624421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.881 [2024-11-06 13:50:57.624430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.881 [2024-11-06 13:50:57.624437] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:16.881 [2024-11-06 13:50:57.624443] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.881 [2024-11-06 13:50:57.624448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:16.881 [2024-11-06 13:50:57.634318] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.881 [2024-11-06 13:50:57.634334] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.881 [2024-11-06 13:50:57.634339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.634344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.881 [2024-11-06 13:50:57.634365] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.634405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.881 [2024-11-06 13:50:57.634418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.881 [2024-11-06 13:50:57.634427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.881 [2024-11-06 13:50:57.634440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.881 [2024-11-06 13:50:57.634451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.881 [2024-11-06 13:50:57.634459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.881 [2024-11-06 13:50:57.634468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.881 [2024-11-06 13:50:57.634475] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:16.881 [2024-11-06 13:50:57.634481] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.881 [2024-11-06 13:50:57.634485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
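The burst of ERROR lines here is expected: nvmf_subsystem_remove_listener has just torn down the 4420 listener, so the target aborts the outstanding admin commands (the ABORTED - SQ DELETION completions above) and every reconnect attempt by the host's bdev_nvme reset poller fails with connect() errno 111 (ECONNREFUSED). The cycle repeats until the discovery poller fetches a fresh log page and prunes the dead path. A sketch of the verification the test is performing, reusing the helpers sketched earlier; the retry loop mirrors the trace's waitforcondition with max=10, and the nvme-cli line is an optional manual cross-check, not part of the test:

    # Drop the 4420 path on the target, then wait for the host to converge
    # on the 4421 path only.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    max=10
    while (( max-- )); do
        [[ "$(get_subsystem_paths nvme0)" == "4421" ]] && break
        sleep 1
    done

    # Optional cross-check with nvme-cli: the discovery controller on 8009
    # should now report only the 4421 path.
    nvme discover -t tcp -a 10.0.0.3 -s 8009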
00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.881 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.881 [2024-11-06 13:50:57.644356] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.881 [2024-11-06 13:50:57.644375] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.881 [2024-11-06 13:50:57.644381] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.644386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.881 [2024-11-06 13:50:57.644406] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.881 [2024-11-06 13:50:57.644446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.881 [2024-11-06 13:50:57.644459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.881 [2024-11-06 13:50:57.644468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.881 [2024-11-06 13:50:57.644480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.881 [2024-11-06 13:50:57.644492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.881 [2024-11-06 13:50:57.644500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.881 [2024-11-06 13:50:57.644509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.881 [2024-11-06 13:50:57.644516] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:16.881 [2024-11-06 13:50:57.644521] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.881 [2024-11-06 13:50:57.644526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:16.881 [2024-11-06 13:50:57.654399] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:16.881 [2024-11-06 13:50:57.654415] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:16.881 [2024-11-06 13:50:57.654421] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:21:16.881 [2024-11-06 13:50:57.654426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:16.882 [2024-11-06 13:50:57.654446] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:16.882 [2024-11-06 13:50:57.654487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:16.882 [2024-11-06 13:50:57.654500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123b280 with addr=10.0.0.3, port=4420 00:21:16.882 [2024-11-06 13:50:57.654510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123b280 is same with the state(6) to be set 00:21:16.882 [2024-11-06 13:50:57.654522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123b280 (9): Bad file descriptor 00:21:16.882 [2024-11-06 13:50:57.654534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:16.882 [2024-11-06 13:50:57.654542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:16.882 [2024-11-06 13:50:57.654551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:16.882 [2024-11-06 13:50:57.654558] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:16.882 [2024-11-06 13:50:57.654564] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:16.882 [2024-11-06 13:50:57.654569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:16.882 [2024-11-06 13:50:57.657684] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:21:16.882 [2024-11-06 13:50:57.657704] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.882 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.141 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.141 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:17.141 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.142 13:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.080 [2024-11-06 13:50:58.982106] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:18.080 [2024-11-06 13:50:58.982142] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:18.080 [2024-11-06 13:50:58.982156] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:18.339 [2024-11-06 13:50:59.068044] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:21:18.339 [2024-11-06 13:50:59.126343] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:21:18.339 [2024-11-06 13:50:59.126917] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1251b80:1 started. 00:21:18.339 [2024-11-06 13:50:59.129241] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:21:18.339 [2024-11-06 13:50:59.129276] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:18.339 [2024-11-06 13:50:59.130744] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1251b80 was disconnected and freed. delete nvme_qpair. 
00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.339 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.339 2024/11/06 13:50:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:18.339 request: 00:21:18.339 { 00:21:18.340 "method": "bdev_nvme_start_discovery", 00:21:18.340 "params": { 00:21:18.340 "name": "nvme", 00:21:18.340 "trtype": "tcp", 00:21:18.340 "traddr": "10.0.0.3", 00:21:18.340 "adrfam": "ipv4", 00:21:18.340 "trsvcid": "8009", 00:21:18.340 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:18.340 "wait_for_attach": true 00:21:18.340 } 00:21:18.340 } 00:21:18.340 Got JSON-RPC error response 00:21:18.340 GoRPCClient: error on JSON-RPC call 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.340 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.340 2024/11/06 13:50:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:18.340 request: 00:21:18.340 { 00:21:18.340 "method": "bdev_nvme_start_discovery", 00:21:18.600 "params": { 00:21:18.600 "name": "nvme_second", 00:21:18.600 "trtype": "tcp", 00:21:18.600 "traddr": "10.0.0.3", 00:21:18.600 "adrfam": "ipv4", 00:21:18.600 "trsvcid": 
"8009", 00:21:18.600 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:18.600 "wait_for_attach": true 00:21:18.600 } 00:21:18.600 } 00:21:18.600 Got JSON-RPC error response 00:21:18.600 GoRPCClient: error on JSON-RPC call 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:18.600 13:50:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.600 13:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.538 [2024-11-06 13:51:00.383537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.538 [2024-11-06 13:51:00.383612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123a3a0 with addr=10.0.0.3, port=8010 00:21:19.538 [2024-11-06 13:51:00.383640] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:19.538 [2024-11-06 13:51:00.383650] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:19.538 [2024-11-06 13:51:00.383660] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:21:20.475 [2024-11-06 13:51:01.381874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.475 [2024-11-06 13:51:01.381912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123a3a0 with addr=10.0.0.3, port=8010 00:21:20.475 [2024-11-06 13:51:01.381951] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:20.475 [2024-11-06 13:51:01.381961] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:20.475 [2024-11-06 13:51:01.381970] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:21:21.853 [2024-11-06 13:51:02.380152] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:21:21.853 2024/11/06 13:51:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:21.853 request: 00:21:21.853 { 00:21:21.853 "method": "bdev_nvme_start_discovery", 00:21:21.853 "params": { 00:21:21.853 "name": "nvme_second", 00:21:21.853 "trtype": "tcp", 00:21:21.853 "traddr": "10.0.0.3", 00:21:21.853 "adrfam": "ipv4", 00:21:21.853 "trsvcid": "8010", 00:21:21.853 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:21.853 "wait_for_attach": false, 00:21:21.853 "attach_timeout_ms": 3000 00:21:21.853 } 00:21:21.853 } 00:21:21.853 Got JSON-RPC error response 00:21:21.853 GoRPCClient: error on JSON-RPC call 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery --
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89801 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.853 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.853 rmmod nvme_tcp 00:21:21.853 rmmod nvme_fabrics 00:21:21.853 rmmod nvme_keyring 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89751 ']' 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89751 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 89751 ']' 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 89751 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89751 00:21:21.854 killing process with pid 89751 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 
-- # '[' reactor_1 = sudo ']' 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89751' 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 89751 00:21:21.854 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 89751 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:22.113 13:51:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:22.113 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 
00:21:22.372 00:21:22.372 real 0m11.579s 00:21:22.372 user 0m20.998s 00:21:22.372 sys 0m2.636s 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:22.372 ************************************ 00:21:22.372 END TEST nvmf_host_discovery 00:21:22.372 ************************************ 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:22.372 13:51:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:22.373 13:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:22.373 13:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:22.373 13:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.373 ************************************ 00:21:22.373 START TEST nvmf_host_multipath_status 00:21:22.373 ************************************ 00:21:22.373 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:22.636 * Looking for test storage... 00:21:22.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:22.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.636 --rc genhtml_branch_coverage=1 00:21:22.636 --rc genhtml_function_coverage=1 00:21:22.636 --rc genhtml_legend=1 00:21:22.636 --rc geninfo_all_blocks=1 00:21:22.636 --rc geninfo_unexecuted_blocks=1 00:21:22.636 00:21:22.636 ' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:22.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.636 --rc genhtml_branch_coverage=1 00:21:22.636 --rc genhtml_function_coverage=1 00:21:22.636 --rc genhtml_legend=1 00:21:22.636 --rc geninfo_all_blocks=1 00:21:22.636 --rc geninfo_unexecuted_blocks=1 00:21:22.636 00:21:22.636 ' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:22.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.636 --rc genhtml_branch_coverage=1 00:21:22.636 --rc genhtml_function_coverage=1 00:21:22.636 --rc genhtml_legend=1 00:21:22.636 --rc geninfo_all_blocks=1 00:21:22.636 --rc geninfo_unexecuted_blocks=1 00:21:22.636 00:21:22.636 ' 00:21:22.636 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:22.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.636 --rc genhtml_branch_coverage=1 00:21:22.636 --rc genhtml_function_coverage=1 00:21:22.636 --rc genhtml_legend=1 00:21:22.636 --rc geninfo_all_blocks=1 00:21:22.637 --rc geninfo_unexecuted_blocks=1 00:21:22.637 00:21:22.637 ' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.637 13:51:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:22.637 Cannot find device "nvmf_init_br" 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:21:22.637 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:22.941 Cannot find device "nvmf_init_br2" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:22.941 Cannot find device "nvmf_tgt_br" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.941 Cannot find device "nvmf_tgt_br2" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:22.941 Cannot find device "nvmf_init_br" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:22.941 Cannot find device "nvmf_init_br2" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:22.941 Cannot find device "nvmf_tgt_br" 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:21:22.941 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:22.941 Cannot find device "nvmf_tgt_br2" 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:22.942 Cannot find device "nvmf_br" 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:21:22.942 Cannot find device "nvmf_init_if" 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:22.942 Cannot find device "nvmf_init_if2" 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.942 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.201 13:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:23.201 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:23.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:21:23.201 00:21:23.201 --- 10.0.0.3 ping statistics --- 00:21:23.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.201 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:23.201 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:23.201 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:23.201 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:21:23.201 00:21:23.201 --- 10.0.0.4 ping statistics --- 00:21:23.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.201 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:23.201 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:21:23.201 00:21:23.201 --- 10.0.0.1 ping statistics --- 00:21:23.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.201 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:23.201 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:23.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:21:23.201 00:21:23.201 --- 10.0.0.2 ping statistics --- 00:21:23.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.202 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90346 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90346 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 90346 ']' 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:23.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
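The nvmf_veth_init block above builds the test network: two initiator veth pairs stay on the host (nvmf_init_if / nvmf_init_if2 at 10.0.0.1-2), the target ends of two more pairs (nvmf_tgt_if / nvmf_tgt_if2 at 10.0.0.3-4) are moved into the nvmf_tgt_ns_spdk namespace, all four bridge-side legs are enslaved to nvmf_br, iptables rules admit TCP/4420, and the four pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace and waitforlisten polls /var/tmp/spdk.sock. A minimal standalone sketch of the same wiring, using hypothetical names (demo_ns, init0, tgt0, demo_br) rather than the test's helpers:

  ip netns add demo_ns                               # target side gets its own namespace
  ip link add init0 type veth peer name init0_br     # initiator leg
  ip link add tgt0 type veth peer name tgt0_br       # target leg
  ip link set tgt0 netns demo_ns
  ip addr add 10.0.0.1/24 dev init0
  ip netns exec demo_ns ip addr add 10.0.0.3/24 dev tgt0
  ip link add demo_br type bridge
  ip link set demo_br up
  ip link set init0 up
  ip link set init0_br up master demo_br             # bridge the two legs together
  ip link set tgt0_br up master demo_br
  ip netns exec demo_ns ip link set tgt0 up
  ip netns exec demo_ns ip link set lo up
  iptables -I INPUT 1 -i init0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.3                                 # host -> namespace reachability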
00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:23.202 13:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.202 [2024-11-06 13:51:04.120384] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:21:23.202 [2024-11-06 13:51:04.120452] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.461 [2024-11-06 13:51:04.270139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:23.461 [2024-11-06 13:51:04.328310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.461 [2024-11-06 13:51:04.328380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.461 [2024-11-06 13:51:04.328396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.461 [2024-11-06 13:51:04.328410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.461 [2024-11-06 13:51:04.328421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.461 [2024-11-06 13:51:04.329955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.461 [2024-11-06 13:51:04.329956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90346 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:24.399 [2024-11-06 13:51:05.261062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.399 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:24.658 Malloc0 00:21:24.658 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:24.917 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:25.176 13:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:25.435 [2024-11-06 13:51:06.163549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:25.435 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:25.695 [2024-11-06 13:51:06.383411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90445 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90445 /var/tmp/bdevperf.sock 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 90445 ']' 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:25.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
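At this point the target side is fully configured: a 64 MiB Malloc0 bdev with 512-byte blocks backs nqn.2016-06.io.spdk:cnode1, and the subsystem listens on both 10.0.0.3:4420 and 10.0.0.3:4421, giving the host two TCP paths to the same controller. bdevperf is started with -z, so it idles until perform_tests is sent over /var/tmp/bdevperf.sock. The two path-attach calls that follow in the trace, rendered readably (rpc.py stands for the full scripts/rpc.py path; option values are copied verbatim from the trace):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -x multipath -l -1 -o 10          # first path; creates bdev Nvme0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -x multipath -l -1 -o 10          # second path folds into the same bdev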
00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:25.695 13:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:26.633 13:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:26.633 13:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:21:26.633 13:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:26.892 13:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:27.150 Nvme0n1 00:21:27.150 13:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:27.409 Nvme0n1 00:21:27.409 13:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:27.409 13:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:29.952 13:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:29.952 13:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:29.952 13:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:29.952 13:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:30.889 13:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:30.889 13:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:30.889 13:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.889 13:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:31.151 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.151 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:31.151 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:31.151 13:51:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.453 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:31.453 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:31.453 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.454 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:31.713 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.713 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.713 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.713 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.972 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.972 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.972 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:31.972 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.235 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.235 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:32.235 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.235 13:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:32.235 13:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.235 13:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:32.235 13:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:32.496 13:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
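Every check_status round from here on is the same mechanism: set_ANA_state pushes an ANA state per listener on the target, and port_status dumps bdev_nvme_get_io_paths on the bdevperf side and compares one flag of the path with the matching trsvcid. The function bodies are not shown in the trace, so the following is a reconstruction assembled from the commands they emit (rpc.py again abbreviates the full script path):

  set_ANA_state() {   # $1 = state for port 4420, $2 = state for port 4421
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  port_status() {     # $1 = port, $2 = attribute, $3 = expected value
      local status
      status=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$status" == "$3" ]]
  }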
00:21:32.755 13:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:33.691 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:33.692 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:33.692 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.692 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:33.951 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:33.951 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:33.951 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.951 13:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:34.210 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.210 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:34.210 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.210 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:34.469 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.469 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:34.469 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.469 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:34.728 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.728 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:34.728 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.728 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.987 13:51:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:34.987 13:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:35.246 13:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:35.505 13:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:36.442 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:36.442 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:36.442 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.442 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:36.701 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.701 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:36.701 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.701 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:36.960 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:36.960 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:36.960 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.960 13:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:37.219 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.219 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:37.219 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.219 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:37.479 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.479 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:37.479 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.479 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:37.738 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.738 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:37.738 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.738 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:37.997 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.997 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:37.997 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:38.257 13:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:38.517 13:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:39.455 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:39.455 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:39.455 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.455 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:39.715 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.715 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:21:39.715 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.715 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:39.976 13:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.237 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.237 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:40.237 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.237 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:40.498 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.498 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:40.498 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.498 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:40.759 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:40.759 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:40.759 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:41.019 13:51:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:41.279 13:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:42.220 13:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:42.220 13:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:42.220 13:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.220 13:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.480 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.480 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:42.480 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.480 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.741 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:43.002 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.002 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:43.002 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.002 13:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:21:43.262 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:43.262 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:43.262 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.262 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:43.522 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:43.522 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:43.522 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:43.782 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:44.042 13:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:44.984 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:44.984 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:44.984 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.984 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:45.243 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.243 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:45.243 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.243 13:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:45.502 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.502 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:45.502 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:45.502 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
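The three flags queried per path are worth distinguishing (an interpretation of the trace, not stated in it): connected tracks the transport connection, which stays true throughout since the TCP listeners never go away; accessible mirrors the ANA state just pushed, dropping to false once a listener is made inaccessible; and current marks the path the bdev is actively routing I/O through. That is why the inaccessible/inaccessible round above yields check_status false false true true false false. A quick way to eyeball all three flags at once:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
      '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'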
00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:45.761 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.021 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:46.021 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:46.021 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.021 13:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:46.281 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.281 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:46.539 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:46.539 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:46.798 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:47.073 13:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:48.010 13:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:48.010 13:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:48.010 13:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
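The bdev_nvme_set_multipath_policy call above switches Nvme0n1 from the default active_passive policy, under which only one path can report current=true at a time, to active_active:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The optimized/optimized round that follows accordingly expects check_status true true true true true true, i.e. both paths current, connected, and accessible; that both-current reading is what distinguishes the rounds after the policy switch from the structurally identical optimized/optimized round at the start of the test.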
00:21:48.010 13:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:48.271 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.271 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:48.271 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.271 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:48.530 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.530 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:48.530 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.530 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.801 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:49.061 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.061 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:49.061 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.061 13:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:49.319 13:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.319 
13:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:49.319 13:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:49.578 13:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:49.836 13:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:50.775 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:50.775 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:50.775 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.775 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:51.037 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:51.037 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:51.037 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.037 13:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:51.296 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.296 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:51.296 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.296 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.556 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:51.815 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.815 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:51.815 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.815 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:52.074 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.074 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:52.074 13:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:52.333 13:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:21:52.591 13:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:53.527 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:53.527 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:53.527 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.527 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:53.785 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.785 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:53.785 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:53.785 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:54.044 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.044 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:21:54.044 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.044 13:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:54.302 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.302 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:54.302 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.302 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:54.561 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.561 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:54.561 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.561 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:54.853 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:55.112 13:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:55.371 13:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:56.308 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:56.308 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.308 13:51:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.308 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.567 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.567 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.567 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.567 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:56.826 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:56.826 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:56.826 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:56.826 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.085 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.085 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:57.085 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.085 13:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.344 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.344 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.344 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.344 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:57.603 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.603 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:57.603 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.603 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90445 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 90445 ']' 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 90445 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90445 00:21:57.863 killing process with pid 90445 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90445' 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 90445 00:21:57.863 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 90445 00:21:57.863 { 00:21:57.863 "results": [ 00:21:57.863 { 00:21:57.863 "job": "Nvme0n1", 00:21:57.863 "core_mask": "0x4", 00:21:57.863 "workload": "verify", 00:21:57.863 "status": "terminated", 00:21:57.863 "verify_range": { 00:21:57.863 "start": 0, 00:21:57.863 "length": 16384 00:21:57.863 }, 00:21:57.863 "queue_depth": 128, 00:21:57.863 "io_size": 4096, 00:21:57.863 "runtime": 30.312721, 00:21:57.863 "iops": 11531.033456217936, 00:21:57.863 "mibps": 45.04309943835131, 00:21:57.863 "io_failed": 0, 00:21:57.863 "io_timeout": 0, 00:21:57.863 "avg_latency_us": 11079.580461483225, 00:21:57.863 "min_latency_us": 269.7767068273092, 00:21:57.863 "max_latency_us": 4096605.352610442 00:21:57.863 } 00:21:57.863 ], 00:21:57.863 "core_count": 1 00:21:57.863 } 00:21:58.143 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90445 00:21:58.143 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:58.143 [2024-11-06 13:51:06.446581] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:21:58.143 [2024-11-06 13:51:06.446670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90445 ] 00:21:58.143 [2024-11-06 13:51:06.601216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.143 [2024-11-06 13:51:06.664708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.143 Running I/O for 90 seconds... 
00:21:58.143 12029.00 IOPS, 46.99 MiB/s [2024-11-06T13:51:39.076Z] 12084.00 IOPS, 47.20 MiB/s [2024-11-06T13:51:39.076Z] 12101.67 IOPS, 47.27 MiB/s [2024-11-06T13:51:39.076Z] 12036.50 IOPS, 47.02 MiB/s [2024-11-06T13:51:39.076Z] 12056.00 IOPS, 47.09 MiB/s [2024-11-06T13:51:39.076Z] 12057.33 IOPS, 47.10 MiB/s [2024-11-06T13:51:39.076Z] 12065.29 IOPS, 47.13 MiB/s [2024-11-06T13:51:39.076Z] 12069.50 IOPS, 47.15 MiB/s [2024-11-06T13:51:39.076Z] 12089.33 IOPS, 47.22 MiB/s [2024-11-06T13:51:39.076Z] 12091.20 IOPS, 47.23 MiB/s [2024-11-06T13:51:39.076Z] 12105.36 IOPS, 47.29 MiB/s [2024-11-06T13:51:39.076Z] 12116.67 IOPS, 47.33 MiB/s [2024-11-06T13:51:39.076Z] 12123.62 IOPS, 47.36 MiB/s [2024-11-06T13:51:39.076Z] [2024-11-06 13:51:21.737599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.737954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.737973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.738464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.739034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.739064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.739102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.739121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.143 [2024-11-06 13:51:21.739135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.143 [2024-11-06 13:51:21.739154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.143 [2024-11-06 13:51:21.739167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 
13:51:21.739818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.144 [2024-11-06 13:51:21.739930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.739981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.739994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.144 [2024-11-06 13:51:21.740476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.144 [2024-11-06 13:51:21.740494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.740971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.740989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.741980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.741998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.145 [2024-11-06 13:51:21.742268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.145 [2024-11-06 13:51:21.742287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.146 [2024-11-06 13:51:21.742495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.146 [2024-11-06 13:51:21.742526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.146 [2024-11-06 13:51:21.742557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.146 [2024-11-06 13:51:21.742588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.146 [2024-11-06 13:51:21.742607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.146 [2024-11-06 13:51:21.742620] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:21:58.146 [2024-11-06 13:51:21.742638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.146 [2024-11-06 13:51:21.742651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
[... several hundred similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: READ and WRITE commands on sqid:1 (nsid:1, len:8, lba 120392-121408) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 13:51:21.742-13:51:21.777 ...]
00:21:58.151 [2024-11-06 13:51:21.777160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.151 [2024-11-06 13:51:21.777175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.151 [2024-11-06 13:51:21.777777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.151 [2024-11-06 13:51:21.777793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.777815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.777831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.777880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.777902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.777918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.777940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.152 [2024-11-06 13:51:21.777955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.777977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.777993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.778970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.778992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:21:58.152 [2024-11-06 13:51:21.779728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.152 [2024-11-06 13:51:21.779744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.779969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.779985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780099] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 
[2024-11-06 13:51:21.780484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.153 [2024-11-06 13:51:21.780559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.780976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.780992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.153 [2024-11-06 13:51:21.781218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.153 [2024-11-06 13:51:21.781247] nvme_qpair.c: 
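The two NOTICE lines repeat with a fixed shape (the command from nvme_io_qpair_print_command, the completion from spdk_nvme_print_completion), so a stretch like the one condensed above can be tallied mechanically instead of read entry by entry. A minimal sketch in Python, assuming only the completion line shape visible in this log; the script name, regex, and output layout are illustrative and not part of SPDK or the autotest harness:

#!/usr/bin/env python3
# tally_nvme_completions.py (hypothetical helper): count
# spdk_nvme_print_completion *NOTICE* entries per status string,
# status-code pair, and queue id in a console log like this one.
import re
import sys
from collections import Counter

# Mirrors e.g. "... 474:spdk_nvme_print_completion: *NOTICE*:
# ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 ..."
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.*?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+)"
)

def main(path):
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            # One physical log line can hold many entries, so scan all.
            for m in COMPLETION.finditer(line):
                counts[(m["status"], m["sct"] + "/" + m["sc"], m["qid"])] += 1
    for (status, code, qid), n in counts.most_common():
        print("%6d  qid:%s  %s (%s)" % (n, qid, status, code))

if __name__ == "__main__":
    main(sys.argv[1])

Run over this console log, it would collapse the block above into a single line: a count of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions on qid:1.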
00:21:58.153 [2024-11-06 13:51:21.781247 - 13:51:21.785888] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: the *NOTICE* pairs continue in the same pattern, covering a second pass over the same LBA ranges (WRITE lba:120784-121408 and READ lba:120392-120520 on sqid:1 nsid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd 0x000f-0x0066), condensed here.
00:21:58.156 [2024-11-06 13:51:21.785907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06
13:51:21.785920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.785938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.785951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.785969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.785982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.786014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.786045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.786076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.786107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.156 [2024-11-06 13:51:21.786139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120544 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.156 [2024-11-06 13:51:21.786610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.156 [2024-11-06 13:51:21.786623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 
m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.786965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.786978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.787983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.787996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.157 [2024-11-06 13:51:21.788280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.157 [2024-11-06 13:51:21.788312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.157 [2024-11-06 13:51:21.788343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.157 [2024-11-06 13:51:21.788380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120416 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:58.157 [2024-11-06 13:51:21.788411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.157 [2024-11-06 13:51:21.788430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.157 [2024-11-06 13:51:21.788443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.158 [2024-11-06 13:51:21.788829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.788871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.788904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.788935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.788967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.788985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.788998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 
13:51:21.789048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.158 [2024-11-06 13:51:21.789686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.158 [2024-11-06 13:51:21.789699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.789717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.789731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.789749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.789762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.790981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.790994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.791013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.791026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.791045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.791058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.791076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.791090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.791108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.159 [2024-11-06 13:51:21.791121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.159 [2024-11-06 13:51:21.791139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.159 [2024-11-06 13:51:21.791152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:21:58.159 [2024-11-06 13:51:21.791178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.159 [2024-11-06 13:51:21.791192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... ~100 further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, SPDK timestamps 13:51:21.791210 through 13:51:21.795689: every outstanding WRITE (lba 120648-121408, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 120392-120640, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0, p:0 m:0 dnr:0, sqhd advancing from 0069 and wrapping at 007f to 0052 ...]
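The "(03/02)" printed on every completion above is the NVMe status pair (SCT/SC): Status Code Type 0x3 is Path Related Status, and Status Code 0x2 under that type is Asymmetric Access Inaccessible, meaning the controller's ANA state for this path currently forbids I/O. dnr:0 (Do Not Retry clear) means the host is allowed to requeue these commands, which is consistent with the second identical pass that follows. Below is a minimal stand-alone C sketch of that decoding, illustrative only (the function name is made up, and this is not SPDK's implementation; the SCT/SC mapping follows the NVMe base specification):

#include <stdint.h>
#include <stdio.h>

/* Illustrative decoder for the "(sct/sc)" pair that
 * spdk_nvme_print_completion shows, e.g. "(03/02)" above.
 * SCT 0x3 = Path Related Status per the NVMe base spec. */
static const char *nvme_path_status_str(uint8_t sct, uint8_t sc)
{
    if (sct != 0x3) {
        return "NOT A PATH RELATED STATUS";
    }
    switch (sc) {
    case 0x0: return "INTERNAL PATH ERROR";
    case 0x1: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
    case 0x2: return "ASYMMETRIC ACCESS INACCESSIBLE";
    case 0x3: return "ASYMMETRIC ACCESS TRANSITION";
    default:  return "PATH RELATED STATUS (reserved sc)";
    }
}

int main(void)
{
    uint8_t sct = 0x03, sc = 0x02; /* the pair seen on every completion in this log */
    printf("%s (%02x/%02x)\n", nvme_path_status_str(sct, sc), sct, sc);
    return 0;
}

Compiled and run, this prints "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", matching the completion lines above.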
[... ~100 further NOTICE pairs, SPDK timestamps 13:51:21.795708 through 13:51:21.799452: a second pass over the same sqid:1 READ/WRITE set (lba 120392-121408) completes identically, ASYMMETRIC ACCESS INACCESSIBLE (03/02) on every command, cdw0:0, p:0 m:0 dnr:0 ...]
00:21:58.165 [2024-11-06 13:51:21.799470] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:21:58.165 [2024-11-06 13:51:21.799795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.799972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.799991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.165 [2024-11-06 13:51:21.800870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.165 [2024-11-06 13:51:21.800889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.800902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.800921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.800942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.800960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.800973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.800991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121256 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.166 [2024-11-06 13:51:21.801831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.801870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 
13:51:21.801889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.801902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.801933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.801964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.801983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.801995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.802014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.802027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.802045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.802058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.802077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.802090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.802108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.166 [2024-11-06 13:51:21.802127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.166 [2024-11-06 13:51:21.802146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.167 [2024-11-06 13:51:21.802316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.167 [2024-11-06 13:51:21.802824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.802843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.802856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.167 [2024-11-06 13:51:21.803966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.167 [2024-11-06 13:51:21.803979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.803997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a 
p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.168 [2024-11-06 13:51:21.804523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.168 [2024-11-06 13:51:21.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.168 [2024-11-06 13:51:21.804586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.168 [2024-11-06 13:51:21.804618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.168 [2024-11-06 13:51:21.804649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.168 [2024-11-06 13:51:21.804667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.168 [2024-11-06 13:51:21.804686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:21:58.168 [2024-11-06 13:51:21.804704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:58.168 [2024-11-06 13:51:21.804717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... repeated nvme_qpair.c notice pairs elided: WRITE (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, lba 120392-121408, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:21:58.173 [2024-11-06 13:51:21.812478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.173 [2024-11-06 13:51:21.812491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.173 [2024-11-06 13:51:21.812525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.173 [2024-11-06 13:51:21.812560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.173 [2024-11-06 13:51:21.812594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.173 [2024-11-06 13:51:21.812628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.173 [2024-11-06 13:51:21.812663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.173 [2024-11-06 13:51:21.812697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.173 [2024-11-06 13:51:21.812732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.173 [2024-11-06 13:51:21.812766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.173 [2024-11-06 13:51:21.812787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.173 [2024-11-06 13:51:21.812800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.812834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.812869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.812883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.812904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.812917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.812938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.812951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.812972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.812986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:21.813317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:21.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.174 11607.50 IOPS, 45.34 MiB/s [2024-11-06T13:51:39.107Z] 10833.67 IOPS, 42.32 MiB/s [2024-11-06T13:51:39.107Z] 10156.56 IOPS, 39.67 MiB/s [2024-11-06T13:51:39.107Z] 9915.12 IOPS, 38.73 MiB/s [2024-11-06T13:51:39.107Z] 10017.67 IOPS, 39.13 MiB/s [2024-11-06T13:51:39.107Z] 10124.21 IOPS, 39.55 MiB/s [2024-11-06T13:51:39.107Z] 10367.65 IOPS, 40.50 MiB/s [2024-11-06T13:51:39.107Z] 10602.19 IOPS, 41.41 MiB/s [2024-11-06T13:51:39.107Z] 10848.14 IOPS, 42.38 MiB/s [2024-11-06T13:51:39.107Z] 10911.57 IOPS, 42.62 MiB/s [2024-11-06T13:51:39.107Z] 10961.83 IOPS, 42.82 MiB/s [2024-11-06T13:51:39.107Z] 11006.84 IOPS, 43.00 MiB/s [2024-11-06T13:51:39.107Z] 11188.15 IOPS, 43.70 MiB/s [2024-11-06T13:51:39.107Z] 11343.44 IOPS, 44.31 MiB/s [2024-11-06T13:51:39.107Z] [2024-11-06 13:51:36.176247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68224 len:8 13:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.174 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:21:58.174 [2024-11-06 13:51:36.176563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.176879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.176977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.176995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.174 [2024-11-06 13:51:36.177008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.177347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.177373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.177429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.177443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.177462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.174 [2024-11-06 13:51:36.177476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.174 [2024-11-06 13:51:36.177495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.177509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.177828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.177872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.175 [2024-11-06 13:51:36.177904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.177975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.177993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.178420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.178532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.175 [2024-11-06 13:51:36.178545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
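The elided trace is the interesting part of this run: every queued WRITE and READ on qid:1 completes with ANA status ASYMMETRIC ACCESS INACCESSIBLE while a path is withdrawn, and the IOPS samples above dip from roughly 11.6k to 9.9k and recover as I/O is requeued on the surviving path. For triage it is usually enough to tally the status strings and the affected LBA window. A minimal sketch, assuming GNU grep and awk and a hypothetical local copy of this console output named build.log:

    # Count completions per NVMe status string, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
    grep -o 'NOTICE\*: [A-Z ]* ([0-9a-f/]*)' build.log | sort | uniq -c | sort -rn

    # Recover the LBA window touched by the affected commands.
    grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n |
        awk 'NR == 1 { min = $1 } { max = $1 } END { print min "-" max }'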
00:21:58.175 [2024-11-06 13:51:36.179631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.175 [2024-11-06 13:51:36.179663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.175 [2024-11-06 13:51:36.179676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.176 11479.89 IOPS, 44.84 MiB/s [2024-11-06T13:51:39.109Z] 11505.83 IOPS, 44.94 MiB/s [2024-11-06T13:51:39.109Z] 11525.73 IOPS, 45.02 MiB/s [2024-11-06T13:51:39.109Z] Received shutdown signal, test time was about 30.313373 seconds 00:21:58.176 00:21:58.176 Latency(us) 00:21:58.176 [2024-11-06T13:51:39.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.176 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.176 Verification LBA range: start 0x0 length 0x4000 00:21:58.176 Nvme0n1 : 30.31 11531.03 45.04 0.00 0.00 11079.58 269.78 4096605.35 00:21:58.176 [2024-11-06T13:51:39.109Z] =================================================================================================================== 00:21:58.176 [2024-11-06T13:51:39.109Z] Total : 11531.03 45.04 0.00 0.00 11079.58 269.78 4096605.35 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.435 rmmod nvme_tcp 00:21:58.435 rmmod nvme_fabrics 00:21:58.435 rmmod nvme_keyring 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90346 ']' 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90346 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 90346 ']' 00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 90346 
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:58.435 rmmod nvme_tcp
00:21:58.435 rmmod nvme_fabrics
00:21:58.435 rmmod nvme_keyring
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90346 ']'
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90346
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 90346 ']'
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 90346
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90346
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:21:58.435 killing process with pid 90346
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90346'
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 90346
00:21:58.435 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 90346
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:21:58.695 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
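The nvmf_veth_fini sequence above is order-sensitive: detach the ports from the bridge, bring them down, delete the bridge, then delete the veth endpoints, namespaced peers last. A condensed sketch of the same idea, offered as an illustration rather than the helper's exact body (interface names taken from the log; error suppression added so reruns stay idempotent):

    #!/usr/bin/env bash
    # Tear down the test bridge and veth pairs; tolerate devices that are already gone.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true   # detach from nvmf_br
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true       # deleting one veth end removes its peer
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true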
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:21:58.954
00:21:58.954 real 0m36.636s
00:21:58.954 user 1m53.420s
00:21:58.954 sys 0m12.271s
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:58.954 13:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:58.954 ************************************
00:21:58.954 END TEST nvmf_host_multipath_status
00:21:58.954 ************************************
00:21:59.213 13:51:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:59.213 13:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:21:59.213 13:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:21:59.213 13:51:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.213 ************************************
00:21:59.213 START TEST nvmf_discovery_remove_ifc
00:21:59.213 ************************************
00:21:59.213 13:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
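The run_test wrapper seen above is what produces the START/END banners and the real/user/sys accounting around each sub-test. A minimal sketch of a wrapper with that observable behavior, assuming only the banner format visible in this log (not the literal autotest_common.sh body):

    # run_test <name> <command...>: banner, run under `time`, banner again.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }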
00:21:59.213 * Looking for test storage...
00:21:59.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:59.474 [... lcov option exports elided: common/autotest_common.sh@1704-@1705 export LCOV_OPTS and LCOV four times with the same option block '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' ...]
00:21:59.474 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
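The xtrace above is scripts/common.sh doing a component-wise version compare: split both versions on '.', '-' or ':', then compare numerically index by index; here 1 < 2 at the first component, so lt 1.15 2 succeeds and the older-lcov code path is taken. A condensed sketch of that logic, as an illustration rather than the exact cmp_versions body:

    # Return success if version $1 is strictly older than version $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "lcov is older than 2"   # prints: lcov is older than 2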
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
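gen-hostnqn above mints a per-run host identity, and NVME_HOST packs it into reusable CLI arguments. A hedged sketch of how such values feed an nvme connect call; the target address and port follow this test's own defaults (10.0.0.3, 4420) and the subsystem NQN its base name, but the exact invocation site is outside this excerpt:

    # Connect to the test subsystem over TCP using the generated host identity.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}                      # uuid suffix of the hostnqn
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode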
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:59.475 [... paths/export.sh@2-@6 elided: records that repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, export PATH, and echo the resulting value ...]
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
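The "[: : integer expression expected" message above is a real, if benign, scripting bug: '[' '' -eq 1 ']' hands an empty string to a numeric comparison, so test(1) errors out and the branch is simply not taken. A defensive pattern, shown as a hedged sketch with placeholder names rather than the actual common.sh variable:

    #!/usr/bin/env bash
    FLAG=''                                    # placeholder for an unset config knob
    # Broken form, as on common.sh line 33: the empty string reaches -eq.
    [ "$FLAG" -eq 1 ] && echo "branch taken"   # emits: [: : integer expression expected
    # Defensive form: default to 0 so the comparison always sees an integer.
    [ "${FLAG:-0}" -eq 1 ] && echo "branch taken"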
-- # discovery_port=8009 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.475 13:51:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:59.475 Cannot find device "nvmf_init_br" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:59.475 Cannot find device "nvmf_init_br2" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:59.475 Cannot find device "nvmf_tgt_br" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.475 Cannot find device "nvmf_tgt_br2" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:59.475 Cannot find device "nvmf_init_br" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:59.475 Cannot find device "nvmf_init_br2" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:59.475 Cannot find device "nvmf_tgt_br" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:59.475 Cannot find device "nvmf_tgt_br2" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:59.475 Cannot find device "nvmf_br" 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:21:59.475 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:59.735 Cannot find device "nvmf_init_if" 00:21:59.735 13:51:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:21:59.735 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:59.735 Cannot find device "nvmf_init_if2" 00:21:59.735 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.736 13:51:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.736 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:59.995 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:59.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:21:59.995 00:21:59.995 --- 10.0.0.3 ping statistics --- 00:21:59.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.995 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:59.995 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:59.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:59.995 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:21:59.995 00:21:59.995 --- 10.0.0.4 ping statistics --- 00:21:59.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.995 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:59.995 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:59.995 00:21:59.995 --- 10.0.0.1 ping statistics --- 00:21:59.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.996 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:59.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:21:59.996 00:21:59.996 --- 10.0.0.2 ping statistics --- 00:21:59.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.996 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91777 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91777 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 91777 ']' 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
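The nvmf_veth_init sequence traced above is easier to review in condensed form. A minimal standalone sketch assembled from the logged commands (run as root; the long iptables rule comments are abbreviated here to the SPDK_NVMF tag that teardown later greps for):

# Target interfaces live in their own network namespace; initiator side stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2 in the root ns, targets 10.0.0.3/.4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the four *_br peer ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on both initiator interfaces and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

# Verify both directions, exactly as the trace does with ping -c 1.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2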
00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.996 13:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.996 [2024-11-06 13:51:40.804292] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:21:59.996 [2024-11-06 13:51:40.804796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.255 [2024-11-06 13:51:40.954896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.255 [2024-11-06 13:51:41.011514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.255 [2024-11-06 13:51:41.011586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.255 [2024-11-06 13:51:41.011597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.255 [2024-11-06 13:51:41.011606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.255 [2024-11-06 13:51:41.011613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.255 [2024-11-06 13:51:41.011992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.824 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.824 [2024-11-06 13:51:41.755421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.083 [2024-11-06 13:51:41.763565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:01.083 null0 00:22:01.083 [2024-11-06 13:51:41.795427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91827 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 91827 /tmp/host.sock 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 91827 ']' 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:01.083 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:01.083 13:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.083 [2024-11-06 13:51:41.875023] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:22:01.083 [2024-11-06 13:51:41.875477] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91827 ] 00:22:01.342 [2024-11-06 13:51:42.029315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.342 [2024-11-06 13:51:42.088791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.934 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.193 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.193 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:02.193 13:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.193 13:51:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.130 [2024-11-06 13:51:43.924841] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:03.130 [2024-11-06 13:51:43.924877] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:03.130 [2024-11-06 13:51:43.924893] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:03.130 [2024-11-06 13:51:44.011785] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:03.389 [2024-11-06 13:51:44.066103] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:03.389 [2024-11-06 13:51:44.066898] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d921b0:1 started. 00:22:03.389 [2024-11-06 13:51:44.068761] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:03.389 [2024-11-06 13:51:44.068825] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:03.389 [2024-11-06 13:51:44.068844] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:03.389 [2024-11-06 13:51:44.068870] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:03.389 [2024-11-06 13:51:44.068910] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:03.389 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:03.390 [2024-11-06 13:51:44.073168] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d921b0 was disconnected and freed. delete nvme_qpair. 
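From here on the trace alternates between get_bdev_list and wait_for_bdev calls. Their shape can be reconstructed from the logged pipeline; a rough sketch, assuming rpc_cmd is the repo's usual JSON-RPC wrapper and ignoring the real script's retry accounting:

get_bdev_list() {
    # All bdev names known to the host app, normalized to one space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second until the list matches: "nvme0n1" while the controller is
    # attached, "" after it is torn down, "nvme1n1" once rediscovery re-attaches.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}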
00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.390 13:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.328 13:51:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.328 13:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.706 13:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.644 13:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.580 13:51:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.580 13:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.517 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.777 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.777 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.777 13:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.777 [2024-11-06 13:51:49.498208] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:08.777 [2024-11-06 13:51:49.498269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.777 [2024-11-06 13:51:49.498284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.777 [2024-11-06 13:51:49.498297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.777 [2024-11-06 13:51:49.498306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.777 [2024-11-06 13:51:49.498316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.777 [2024-11-06 13:51:49.498325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.777 [2024-11-06 13:51:49.498335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.777 [2024-11-06 13:51:49.498343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.777 [2024-11-06 13:51:49.498353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.777 [2024-11-06 
13:51:49.498362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.777 [2024-11-06 13:51:49.498372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6f520 is same with the state(6) to be set 00:22:08.777 [2024-11-06 13:51:49.508186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f520 (9): Bad file descriptor 00:22:08.777 [2024-11-06 13:51:49.518192] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:08.777 [2024-11-06 13:51:49.518206] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:08.777 [2024-11-06 13:51:49.518211] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:08.777 [2024-11-06 13:51:49.518218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:08.777 [2024-11-06 13:51:49.518251] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.714 [2024-11-06 13:51:50.558948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:09.714 [2024-11-06 13:51:50.559084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6f520 with addr=10.0.0.3, port=4420 00:22:09.714 [2024-11-06 13:51:50.559132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6f520 is same with the state(6) to be set 00:22:09.714 [2024-11-06 13:51:50.559251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6f520 (9): Bad file descriptor 00:22:09.714 [2024-11-06 13:51:50.560327] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:22:09.714 [2024-11-06 13:51:50.560437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:09.714 [2024-11-06 13:51:50.560469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:09.714 [2024-11-06 13:51:50.560501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:09.714 [2024-11-06 13:51:50.560530] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:09.714 [2024-11-06 13:51:50.560551] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:09.714 [2024-11-06 13:51:50.560568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:09.714 [2024-11-06 13:51:50.560598] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:09.714 [2024-11-06 13:51:50.560616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.714 13:51:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.662 [2024-11-06 13:51:51.559094] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:10.663 [2024-11-06 13:51:51.559126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:10.663 [2024-11-06 13:51:51.559149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:10.663 [2024-11-06 13:51:51.559160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:10.663 [2024-11-06 13:51:51.559170] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:22:10.663 [2024-11-06 13:51:51.559185] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:10.663 [2024-11-06 13:51:51.559192] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:10.663 [2024-11-06 13:51:51.559198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
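Both errno 110 (ETIMEDOUT) failures above — the in-band recv at 13:51:49 and the reconnect's connect() at 13:51:50 — follow from the interface being pulled at 13:51:44, and the give-up by 13:51:51 matches the timeouts passed when discovery was started. The logged invocation again, annotated (the flag descriptions paraphrase general SPDK bdev_nvme option semantics; they are not stated in this log):

# --ctrlr-loss-timeout-sec 2    delete the controller if reconnects still fail ~2s after the loss
# --reconnect-delay-sec 1       wait 1s between reconnect attempts
# --fast-io-fail-timeout-sec 1  start failing queued I/O 1s after the disconnect
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach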
00:22:10.663 [2024-11-06 13:51:51.559249] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:22:10.663 [2024-11-06 13:51:51.559293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.663 [2024-11-06 13:51:51.559306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.663 [2024-11-06 13:51:51.559320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.663 [2024-11-06 13:51:51.559330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.663 [2024-11-06 13:51:51.559340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.663 [2024-11-06 13:51:51.559349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.663 [2024-11-06 13:51:51.559359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.663 [2024-11-06 13:51:51.559368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.663 [2024-11-06 13:51:51.559378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.663 [2024-11-06 13:51:51.559387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.663 [2024-11-06 13:51:51.559395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:22:10.663 [2024-11-06 13:51:51.559908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfc2d0 (9): Bad file descriptor 00:22:10.663 [2024-11-06 13:51:51.560916] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:10.663 [2024-11-06 13:51:51.560933] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:10.922 13:51:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.860 13:51:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:11.860 13:51:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.796 [2024-11-06 13:51:53.560996] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:12.796 [2024-11-06 13:51:53.561019] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:12.796 [2024-11-06 13:51:53.561047] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:12.797 [2024-11-06 13:51:53.646942] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:22:12.797 [2024-11-06 13:51:53.701347] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:22:12.797 [2024-11-06 13:51:53.702011] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d68fa0:1 started. 00:22:12.797 [2024-11-06 13:51:53.703226] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:12.797 [2024-11-06 13:51:53.703264] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:12.797 [2024-11-06 13:51:53.703285] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:12.797 [2024-11-06 13:51:53.703301] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:22:12.797 [2024-11-06 13:51:53.703310] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:12.797 [2024-11-06 13:51:53.709537] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d68fa0 was disconnected and freed. delete nvme_qpair. 
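That closes the cycle this test exists to cover: pull the target's data-path address, watch the controller drain out, restore it, and watch the persistent discovery service re-attach without any new RPC. Pulled together from the commands scattered through the trace (all verbatim; wait_for_bdev as sketched earlier):

# Fault injection: remove the target address and down the interface inside the namespace.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''        # bdev list drains once the controller is deleted

# Recovery: restore the address; the still-running discovery service re-attaches on its own.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1   # second attach creates controller instance 2, hence nvme1n1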
00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91827 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 91827 ']' 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 91827 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91827 00:22:13.056 killing process with pid 91827 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91827' 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 91827 00:22:13.056 13:51:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 91827 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.316 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.316 rmmod nvme_tcp 00:22:13.316 rmmod nvme_fabrics 00:22:13.575 rmmod nvme_keyring 00:22:13.575 13:51:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91777 ']' 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91777 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 91777 ']' 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 91777 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91777 00:22:13.575 killing process with pid 91777 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91777' 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 91777 00:22:13.575 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 91777 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:13.835 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0
00:22:14.095 ************************************
00:22:14.095 END TEST nvmf_discovery_remove_ifc
00:22:14.095 ************************************
00:22:14.095
00:22:14.095 real 0m14.928s
00:22:14.095 user 0m24.978s
00:22:14.095 sys 0m2.864s
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:14.095 ************************************
00:22:14.095 START TEST nvmf_identify_kernel_target
00:22:14.095 ************************************
00:22:14.095 13:51:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:22:14.356 * Looking for test storage...
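
The trace above closes out nvmf_discovery_remove_ifc before run_test opens nvmf_identify_kernel_target: nvmftestfini kills the target process, unloads nvme-tcp and nvme-fabrics, and nvmf_veth_fini (nvmf/common.sh@233-246) unwinds the virtual topology roughly in reverse order of its creation. A minimal sketch of that teardown, with the interface and namespace names taken from the trace; _remove_spdk_ns runs with its xtrace redirected away, so the closing "ip netns delete" is an assumption about its body rather than a command shown in the log:

    # Detach the bridge-side veth ends from the bridge, then bring them down (@233-240)
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge                          # drop the bridge itself (@241)
    ip link delete nvmf_init_if                                 # deleting one veth end also removes its peer (@242)
    ip link delete nvmf_init_if2                                # (@243)
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side ends live inside the netns (@244)
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2  # (@245)
    ip netns delete nvmf_tgt_ns_spdk                            # assumed body of _remove_spdk_ns (@246)

The same helpers run again at the start of the test below: nvmf_identify_kernel_target first repeats this teardown against an already-clean host (hence the harmless "Cannot find device" lines in its nvmf_veth_init trace) before rebuilding the topology from scratch.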
00:22:14.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.356 --rc genhtml_branch_coverage=1 00:22:14.356 --rc genhtml_function_coverage=1 00:22:14.356 --rc genhtml_legend=1 00:22:14.356 --rc geninfo_all_blocks=1 00:22:14.356 --rc geninfo_unexecuted_blocks=1 00:22:14.356 00:22:14.356 ' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.356 --rc genhtml_branch_coverage=1 00:22:14.356 --rc genhtml_function_coverage=1 00:22:14.356 --rc genhtml_legend=1 00:22:14.356 --rc geninfo_all_blocks=1 00:22:14.356 --rc geninfo_unexecuted_blocks=1 00:22:14.356 00:22:14.356 ' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.356 --rc genhtml_branch_coverage=1 00:22:14.356 --rc genhtml_function_coverage=1 00:22:14.356 --rc genhtml_legend=1 00:22:14.356 --rc geninfo_all_blocks=1 00:22:14.356 --rc geninfo_unexecuted_blocks=1 00:22:14.356 00:22:14.356 ' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.356 --rc genhtml_branch_coverage=1 00:22:14.356 --rc genhtml_function_coverage=1 00:22:14.356 --rc genhtml_legend=1 00:22:14.356 --rc geninfo_all_blocks=1 00:22:14.356 --rc geninfo_unexecuted_blocks=1 00:22:14.356 00:22:14.356 ' 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
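
With test/nvmf/common.sh sourced (above), identify_kernel_nvmf.sh calls nvmftestinit to rebuild the veth/bridge topology and then configure_kernel_target (nvmf/common.sh@660 onward) to publish one of the VM's local NVMe namespaces through the Linux kernel target's configfs tree, giving spdk_nvme_identify something to query over NVMe/TCP. A condensed sketch of that configfs sequence as traced further below; xtrace does not show where each echo is redirected, so the attribute paths are inferred from the standard nvmet configfs layout (the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output below corroborates the attr_model guess):

    # Publish /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 (nvmf/common.sh@686-705)
    modprobe nvmet                                                   # @670: makes /sys/kernel/config/nvmet appear
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"                                                  # @686
    mkdir "$subsys/namespaces/1"                                     # @687
    mkdir "$nvmet/ports/1"                                           # @688
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"     # @693
    echo 1 > "$subsys/attr_allow_any_host"                           # @695
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"           # @696
    echo 1 > "$subsys/namespaces/1/enable"                           # @697
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                     # @699
    echo tcp > "$nvmet/ports/1/addr_trtype"                          # @700
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"                        # @701
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"                         # @702
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                     # @705: expose the subsystem on the port

The nvme discover call at nvmf/common.sh@708 then confirms that both the discovery subsystem and nqn.2016-06.io.spdk:testnqn answer at 10.0.0.1:4420, and the two spdk_nvme_identify runs that follow dump first the discovery controller and then the kernel-target controller itself.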
00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.356 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.357 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:14.357 13:51:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.357 13:51:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:14.357 Cannot find device "nvmf_init_br" 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:14.357 Cannot find device "nvmf_init_br2" 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:22:14.357 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:14.616 Cannot find device "nvmf_tgt_br" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.616 Cannot find device "nvmf_tgt_br2" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:14.616 Cannot find device "nvmf_init_br" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:14.616 Cannot find device "nvmf_init_br2" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:14.616 Cannot find device "nvmf_tgt_br" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:14.616 Cannot find device "nvmf_tgt_br2" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:14.616 Cannot find device "nvmf_br" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:14.616 Cannot find device "nvmf_init_if" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:14.616 Cannot find device "nvmf_init_if2" 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.616 13:51:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.616 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:14.876 13:51:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:14.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:22:14.876 00:22:14.876 --- 10.0.0.3 ping statistics --- 00:22:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.876 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:14.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:14.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:22:14.876 00:22:14.876 --- 10.0.0.4 ping statistics --- 00:22:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.876 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:22:14.876 00:22:14.876 --- 10.0.0.1 ping statistics --- 00:22:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.876 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:14.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:22:14.876 00:22:14.876 --- 10.0.0.2 ping statistics --- 00:22:14.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.876 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:14.876 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:14.877 13:51:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:15.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:15.445 Waiting for block devices as requested 00:22:15.705 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:15.705 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:15.705 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:15.964 No valid GPT data, bailing 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:15.964 13:51:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:15.964 No valid GPT data, bailing 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:15.964 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:15.965 No valid GPT data, bailing 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:15.965 No valid GPT data, bailing 00:22:15.965 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:16.227 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.1 -t tcp -s 4420 00:22:16.227 00:22:16.227 Discovery Log Number of Records 2, Generation counter 2 00:22:16.227 =====Discovery Log Entry 0====== 00:22:16.227 trtype: tcp 00:22:16.227 adrfam: ipv4 00:22:16.227 subtype: current discovery subsystem 00:22:16.227 treq: not specified, sq flow control disable supported 00:22:16.227 portid: 1 00:22:16.227 trsvcid: 4420 00:22:16.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:16.227 traddr: 10.0.0.1 00:22:16.227 eflags: none 00:22:16.227 sectype: none 00:22:16.228 =====Discovery Log Entry 1====== 00:22:16.228 trtype: tcp 00:22:16.228 adrfam: ipv4 00:22:16.228 subtype: nvme subsystem 00:22:16.228 treq: not 
specified, sq flow control disable supported 00:22:16.228 portid: 1 00:22:16.228 trsvcid: 4420 00:22:16.228 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:16.228 traddr: 10.0.0.1 00:22:16.228 eflags: none 00:22:16.228 sectype: none 00:22:16.228 13:51:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:16.228 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:16.489 ===================================================== 00:22:16.489 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:16.489 ===================================================== 00:22:16.489 Controller Capabilities/Features 00:22:16.489 ================================ 00:22:16.489 Vendor ID: 0000 00:22:16.489 Subsystem Vendor ID: 0000 00:22:16.489 Serial Number: d1f71b9ff745cffb84f2 00:22:16.489 Model Number: Linux 00:22:16.489 Firmware Version: 6.8.9-20 00:22:16.489 Recommended Arb Burst: 0 00:22:16.489 IEEE OUI Identifier: 00 00 00 00:22:16.489 Multi-path I/O 00:22:16.489 May have multiple subsystem ports: No 00:22:16.489 May have multiple controllers: No 00:22:16.489 Associated with SR-IOV VF: No 00:22:16.489 Max Data Transfer Size: Unlimited 00:22:16.489 Max Number of Namespaces: 0 00:22:16.489 Max Number of I/O Queues: 1024 00:22:16.489 NVMe Specification Version (VS): 1.3 00:22:16.489 NVMe Specification Version (Identify): 1.3 00:22:16.489 Maximum Queue Entries: 1024 00:22:16.489 Contiguous Queues Required: No 00:22:16.489 Arbitration Mechanisms Supported 00:22:16.489 Weighted Round Robin: Not Supported 00:22:16.489 Vendor Specific: Not Supported 00:22:16.489 Reset Timeout: 7500 ms 00:22:16.489 Doorbell Stride: 4 bytes 00:22:16.489 NVM Subsystem Reset: Not Supported 00:22:16.489 Command Sets Supported 00:22:16.489 NVM Command Set: Supported 00:22:16.489 Boot Partition: Not Supported 00:22:16.489 Memory Page Size Minimum: 4096 bytes 00:22:16.489 Memory Page Size Maximum: 4096 bytes 00:22:16.489 Persistent Memory Region: Not Supported 00:22:16.489 Optional Asynchronous Events Supported 00:22:16.489 Namespace Attribute Notices: Not Supported 00:22:16.489 Firmware Activation Notices: Not Supported 00:22:16.489 ANA Change Notices: Not Supported 00:22:16.489 PLE Aggregate Log Change Notices: Not Supported 00:22:16.489 LBA Status Info Alert Notices: Not Supported 00:22:16.489 EGE Aggregate Log Change Notices: Not Supported 00:22:16.489 Normal NVM Subsystem Shutdown event: Not Supported 00:22:16.489 Zone Descriptor Change Notices: Not Supported 00:22:16.489 Discovery Log Change Notices: Supported 00:22:16.489 Controller Attributes 00:22:16.489 128-bit Host Identifier: Not Supported 00:22:16.489 Non-Operational Permissive Mode: Not Supported 00:22:16.489 NVM Sets: Not Supported 00:22:16.489 Read Recovery Levels: Not Supported 00:22:16.489 Endurance Groups: Not Supported 00:22:16.489 Predictable Latency Mode: Not Supported 00:22:16.489 Traffic Based Keep ALive: Not Supported 00:22:16.489 Namespace Granularity: Not Supported 00:22:16.489 SQ Associations: Not Supported 00:22:16.489 UUID List: Not Supported 00:22:16.489 Multi-Domain Subsystem: Not Supported 00:22:16.489 Fixed Capacity Management: Not Supported 00:22:16.489 Variable Capacity Management: Not Supported 00:22:16.489 Delete Endurance Group: Not Supported 00:22:16.489 Delete NVM Set: Not Supported 00:22:16.489 Extended LBA Formats Supported: Not Supported 00:22:16.489 Flexible Data 
Placement Supported: Not Supported 00:22:16.489 00:22:16.489 Controller Memory Buffer Support 00:22:16.489 ================================ 00:22:16.489 Supported: No 00:22:16.489 00:22:16.489 Persistent Memory Region Support 00:22:16.489 ================================ 00:22:16.489 Supported: No 00:22:16.489 00:22:16.489 Admin Command Set Attributes 00:22:16.489 ============================ 00:22:16.489 Security Send/Receive: Not Supported 00:22:16.489 Format NVM: Not Supported 00:22:16.489 Firmware Activate/Download: Not Supported 00:22:16.489 Namespace Management: Not Supported 00:22:16.489 Device Self-Test: Not Supported 00:22:16.489 Directives: Not Supported 00:22:16.489 NVMe-MI: Not Supported 00:22:16.489 Virtualization Management: Not Supported 00:22:16.489 Doorbell Buffer Config: Not Supported 00:22:16.489 Get LBA Status Capability: Not Supported 00:22:16.489 Command & Feature Lockdown Capability: Not Supported 00:22:16.489 Abort Command Limit: 1 00:22:16.489 Async Event Request Limit: 1 00:22:16.489 Number of Firmware Slots: N/A 00:22:16.489 Firmware Slot 1 Read-Only: N/A 00:22:16.489 Firmware Activation Without Reset: N/A 00:22:16.489 Multiple Update Detection Support: N/A 00:22:16.489 Firmware Update Granularity: No Information Provided 00:22:16.489 Per-Namespace SMART Log: No 00:22:16.489 Asymmetric Namespace Access Log Page: Not Supported 00:22:16.489 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:16.489 Command Effects Log Page: Not Supported 00:22:16.489 Get Log Page Extended Data: Supported 00:22:16.489 Telemetry Log Pages: Not Supported 00:22:16.489 Persistent Event Log Pages: Not Supported 00:22:16.489 Supported Log Pages Log Page: May Support 00:22:16.489 Commands Supported & Effects Log Page: Not Supported 00:22:16.489 Feature Identifiers & Effects Log Page:May Support 00:22:16.489 NVMe-MI Commands & Effects Log Page: May Support 00:22:16.489 Data Area 4 for Telemetry Log: Not Supported 00:22:16.489 Error Log Page Entries Supported: 1 00:22:16.489 Keep Alive: Not Supported 00:22:16.489 00:22:16.489 NVM Command Set Attributes 00:22:16.489 ========================== 00:22:16.489 Submission Queue Entry Size 00:22:16.489 Max: 1 00:22:16.489 Min: 1 00:22:16.489 Completion Queue Entry Size 00:22:16.489 Max: 1 00:22:16.489 Min: 1 00:22:16.489 Number of Namespaces: 0 00:22:16.489 Compare Command: Not Supported 00:22:16.489 Write Uncorrectable Command: Not Supported 00:22:16.489 Dataset Management Command: Not Supported 00:22:16.489 Write Zeroes Command: Not Supported 00:22:16.489 Set Features Save Field: Not Supported 00:22:16.489 Reservations: Not Supported 00:22:16.489 Timestamp: Not Supported 00:22:16.489 Copy: Not Supported 00:22:16.489 Volatile Write Cache: Not Present 00:22:16.489 Atomic Write Unit (Normal): 1 00:22:16.489 Atomic Write Unit (PFail): 1 00:22:16.490 Atomic Compare & Write Unit: 1 00:22:16.490 Fused Compare & Write: Not Supported 00:22:16.490 Scatter-Gather List 00:22:16.490 SGL Command Set: Supported 00:22:16.490 SGL Keyed: Not Supported 00:22:16.490 SGL Bit Bucket Descriptor: Not Supported 00:22:16.490 SGL Metadata Pointer: Not Supported 00:22:16.490 Oversized SGL: Not Supported 00:22:16.490 SGL Metadata Address: Not Supported 00:22:16.490 SGL Offset: Supported 00:22:16.490 Transport SGL Data Block: Not Supported 00:22:16.490 Replay Protected Memory Block: Not Supported 00:22:16.490 00:22:16.490 Firmware Slot Information 00:22:16.490 ========================= 00:22:16.490 Active slot: 0 00:22:16.490 00:22:16.490 00:22:16.490 Error Log 
00:22:16.490 ========= 00:22:16.490 00:22:16.490 Active Namespaces 00:22:16.490 ================= 00:22:16.490 Discovery Log Page 00:22:16.490 ================== 00:22:16.490 Generation Counter: 2 00:22:16.490 Number of Records: 2 00:22:16.490 Record Format: 0 00:22:16.490 00:22:16.490 Discovery Log Entry 0 00:22:16.490 ---------------------- 00:22:16.490 Transport Type: 3 (TCP) 00:22:16.490 Address Family: 1 (IPv4) 00:22:16.490 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:16.490 Entry Flags: 00:22:16.490 Duplicate Returned Information: 0 00:22:16.490 Explicit Persistent Connection Support for Discovery: 0 00:22:16.490 Transport Requirements: 00:22:16.490 Secure Channel: Not Specified 00:22:16.490 Port ID: 1 (0x0001) 00:22:16.490 Controller ID: 65535 (0xffff) 00:22:16.490 Admin Max SQ Size: 32 00:22:16.490 Transport Service Identifier: 4420 00:22:16.490 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:16.490 Transport Address: 10.0.0.1 00:22:16.490 Discovery Log Entry 1 00:22:16.490 ---------------------- 00:22:16.490 Transport Type: 3 (TCP) 00:22:16.490 Address Family: 1 (IPv4) 00:22:16.490 Subsystem Type: 2 (NVM Subsystem) 00:22:16.490 Entry Flags: 00:22:16.490 Duplicate Returned Information: 0 00:22:16.490 Explicit Persistent Connection Support for Discovery: 0 00:22:16.490 Transport Requirements: 00:22:16.490 Secure Channel: Not Specified 00:22:16.490 Port ID: 1 (0x0001) 00:22:16.490 Controller ID: 65535 (0xffff) 00:22:16.490 Admin Max SQ Size: 32 00:22:16.490 Transport Service Identifier: 4420 00:22:16.490 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:16.490 Transport Address: 10.0.0.1 00:22:16.490 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:16.490 get_feature(0x01) failed 00:22:16.490 get_feature(0x02) failed 00:22:16.490 get_feature(0x04) failed 00:22:16.490 ===================================================== 00:22:16.490 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:16.490 ===================================================== 00:22:16.490 Controller Capabilities/Features 00:22:16.490 ================================ 00:22:16.490 Vendor ID: 0000 00:22:16.490 Subsystem Vendor ID: 0000 00:22:16.490 Serial Number: 51ccae7025cf2d970e81 00:22:16.490 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:16.490 Firmware Version: 6.8.9-20 00:22:16.490 Recommended Arb Burst: 6 00:22:16.490 IEEE OUI Identifier: 00 00 00 00:22:16.490 Multi-path I/O 00:22:16.490 May have multiple subsystem ports: Yes 00:22:16.490 May have multiple controllers: Yes 00:22:16.490 Associated with SR-IOV VF: No 00:22:16.490 Max Data Transfer Size: Unlimited 00:22:16.490 Max Number of Namespaces: 1024 00:22:16.490 Max Number of I/O Queues: 128 00:22:16.490 NVMe Specification Version (VS): 1.3 00:22:16.490 NVMe Specification Version (Identify): 1.3 00:22:16.490 Maximum Queue Entries: 1024 00:22:16.490 Contiguous Queues Required: No 00:22:16.490 Arbitration Mechanisms Supported 00:22:16.490 Weighted Round Robin: Not Supported 00:22:16.490 Vendor Specific: Not Supported 00:22:16.490 Reset Timeout: 7500 ms 00:22:16.490 Doorbell Stride: 4 bytes 00:22:16.490 NVM Subsystem Reset: Not Supported 00:22:16.490 Command Sets Supported 00:22:16.490 NVM Command Set: Supported 00:22:16.490 Boot Partition: Not Supported 00:22:16.490 Memory 
Page Size Minimum: 4096 bytes 00:22:16.490 Memory Page Size Maximum: 4096 bytes 00:22:16.490 Persistent Memory Region: Not Supported 00:22:16.490 Optional Asynchronous Events Supported 00:22:16.490 Namespace Attribute Notices: Supported 00:22:16.490 Firmware Activation Notices: Not Supported 00:22:16.490 ANA Change Notices: Supported 00:22:16.490 PLE Aggregate Log Change Notices: Not Supported 00:22:16.490 LBA Status Info Alert Notices: Not Supported 00:22:16.490 EGE Aggregate Log Change Notices: Not Supported 00:22:16.490 Normal NVM Subsystem Shutdown event: Not Supported 00:22:16.490 Zone Descriptor Change Notices: Not Supported 00:22:16.490 Discovery Log Change Notices: Not Supported 00:22:16.490 Controller Attributes 00:22:16.490 128-bit Host Identifier: Supported 00:22:16.490 Non-Operational Permissive Mode: Not Supported 00:22:16.490 NVM Sets: Not Supported 00:22:16.490 Read Recovery Levels: Not Supported 00:22:16.490 Endurance Groups: Not Supported 00:22:16.490 Predictable Latency Mode: Not Supported 00:22:16.490 Traffic Based Keep ALive: Supported 00:22:16.490 Namespace Granularity: Not Supported 00:22:16.490 SQ Associations: Not Supported 00:22:16.490 UUID List: Not Supported 00:22:16.490 Multi-Domain Subsystem: Not Supported 00:22:16.490 Fixed Capacity Management: Not Supported 00:22:16.490 Variable Capacity Management: Not Supported 00:22:16.490 Delete Endurance Group: Not Supported 00:22:16.490 Delete NVM Set: Not Supported 00:22:16.490 Extended LBA Formats Supported: Not Supported 00:22:16.490 Flexible Data Placement Supported: Not Supported 00:22:16.490 00:22:16.490 Controller Memory Buffer Support 00:22:16.490 ================================ 00:22:16.490 Supported: No 00:22:16.490 00:22:16.490 Persistent Memory Region Support 00:22:16.490 ================================ 00:22:16.490 Supported: No 00:22:16.490 00:22:16.490 Admin Command Set Attributes 00:22:16.490 ============================ 00:22:16.490 Security Send/Receive: Not Supported 00:22:16.490 Format NVM: Not Supported 00:22:16.490 Firmware Activate/Download: Not Supported 00:22:16.490 Namespace Management: Not Supported 00:22:16.490 Device Self-Test: Not Supported 00:22:16.490 Directives: Not Supported 00:22:16.490 NVMe-MI: Not Supported 00:22:16.490 Virtualization Management: Not Supported 00:22:16.490 Doorbell Buffer Config: Not Supported 00:22:16.490 Get LBA Status Capability: Not Supported 00:22:16.490 Command & Feature Lockdown Capability: Not Supported 00:22:16.490 Abort Command Limit: 4 00:22:16.490 Async Event Request Limit: 4 00:22:16.490 Number of Firmware Slots: N/A 00:22:16.490 Firmware Slot 1 Read-Only: N/A 00:22:16.490 Firmware Activation Without Reset: N/A 00:22:16.490 Multiple Update Detection Support: N/A 00:22:16.490 Firmware Update Granularity: No Information Provided 00:22:16.490 Per-Namespace SMART Log: Yes 00:22:16.490 Asymmetric Namespace Access Log Page: Supported 00:22:16.490 ANA Transition Time : 10 sec 00:22:16.490 00:22:16.490 Asymmetric Namespace Access Capabilities 00:22:16.490 ANA Optimized State : Supported 00:22:16.490 ANA Non-Optimized State : Supported 00:22:16.490 ANA Inaccessible State : Supported 00:22:16.490 ANA Persistent Loss State : Supported 00:22:16.490 ANA Change State : Supported 00:22:16.490 ANAGRPID is not changed : No 00:22:16.490 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:16.490 00:22:16.490 ANA Group Identifier Maximum : 128 00:22:16.490 Number of ANA Group Identifiers : 128 00:22:16.490 Max Number of Allowed Namespaces : 1024 00:22:16.490 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:22:16.490 Command Effects Log Page: Supported 00:22:16.490 Get Log Page Extended Data: Supported 00:22:16.490 Telemetry Log Pages: Not Supported 00:22:16.490 Persistent Event Log Pages: Not Supported 00:22:16.490 Supported Log Pages Log Page: May Support 00:22:16.490 Commands Supported & Effects Log Page: Not Supported 00:22:16.491 Feature Identifiers & Effects Log Page:May Support 00:22:16.491 NVMe-MI Commands & Effects Log Page: May Support 00:22:16.491 Data Area 4 for Telemetry Log: Not Supported 00:22:16.491 Error Log Page Entries Supported: 128 00:22:16.491 Keep Alive: Supported 00:22:16.491 Keep Alive Granularity: 1000 ms 00:22:16.491 00:22:16.491 NVM Command Set Attributes 00:22:16.491 ========================== 00:22:16.491 Submission Queue Entry Size 00:22:16.491 Max: 64 00:22:16.491 Min: 64 00:22:16.491 Completion Queue Entry Size 00:22:16.491 Max: 16 00:22:16.491 Min: 16 00:22:16.491 Number of Namespaces: 1024 00:22:16.491 Compare Command: Not Supported 00:22:16.491 Write Uncorrectable Command: Not Supported 00:22:16.491 Dataset Management Command: Supported 00:22:16.491 Write Zeroes Command: Supported 00:22:16.491 Set Features Save Field: Not Supported 00:22:16.491 Reservations: Not Supported 00:22:16.491 Timestamp: Not Supported 00:22:16.491 Copy: Not Supported 00:22:16.491 Volatile Write Cache: Present 00:22:16.491 Atomic Write Unit (Normal): 1 00:22:16.491 Atomic Write Unit (PFail): 1 00:22:16.491 Atomic Compare & Write Unit: 1 00:22:16.491 Fused Compare & Write: Not Supported 00:22:16.491 Scatter-Gather List 00:22:16.491 SGL Command Set: Supported 00:22:16.491 SGL Keyed: Not Supported 00:22:16.491 SGL Bit Bucket Descriptor: Not Supported 00:22:16.491 SGL Metadata Pointer: Not Supported 00:22:16.491 Oversized SGL: Not Supported 00:22:16.491 SGL Metadata Address: Not Supported 00:22:16.491 SGL Offset: Supported 00:22:16.491 Transport SGL Data Block: Not Supported 00:22:16.491 Replay Protected Memory Block: Not Supported 00:22:16.491 00:22:16.491 Firmware Slot Information 00:22:16.491 ========================= 00:22:16.491 Active slot: 0 00:22:16.491 00:22:16.491 Asymmetric Namespace Access 00:22:16.491 =========================== 00:22:16.491 Change Count : 0 00:22:16.491 Number of ANA Group Descriptors : 1 00:22:16.491 ANA Group Descriptor : 0 00:22:16.491 ANA Group ID : 1 00:22:16.491 Number of NSID Values : 1 00:22:16.491 Change Count : 0 00:22:16.491 ANA State : 1 00:22:16.491 Namespace Identifier : 1 00:22:16.491 00:22:16.491 Commands Supported and Effects 00:22:16.491 ============================== 00:22:16.491 Admin Commands 00:22:16.491 -------------- 00:22:16.491 Get Log Page (02h): Supported 00:22:16.491 Identify (06h): Supported 00:22:16.491 Abort (08h): Supported 00:22:16.491 Set Features (09h): Supported 00:22:16.491 Get Features (0Ah): Supported 00:22:16.491 Asynchronous Event Request (0Ch): Supported 00:22:16.491 Keep Alive (18h): Supported 00:22:16.491 I/O Commands 00:22:16.491 ------------ 00:22:16.491 Flush (00h): Supported 00:22:16.491 Write (01h): Supported LBA-Change 00:22:16.491 Read (02h): Supported 00:22:16.491 Write Zeroes (08h): Supported LBA-Change 00:22:16.491 Dataset Management (09h): Supported 00:22:16.491 00:22:16.491 Error Log 00:22:16.491 ========= 00:22:16.491 Entry: 0 00:22:16.491 Error Count: 0x3 00:22:16.491 Submission Queue Id: 0x0 00:22:16.491 Command Id: 0x5 00:22:16.491 Phase Bit: 0 00:22:16.491 Status Code: 0x2 00:22:16.491 Status Code Type: 0x0 00:22:16.491 Do Not Retry: 1 00:22:16.491 Error 
Location: 0x28 00:22:16.491 LBA: 0x0 00:22:16.491 Namespace: 0x0 00:22:16.491 Vendor Log Page: 0x0 00:22:16.491 ----------- 00:22:16.491 Entry: 1 00:22:16.491 Error Count: 0x2 00:22:16.491 Submission Queue Id: 0x0 00:22:16.491 Command Id: 0x5 00:22:16.491 Phase Bit: 0 00:22:16.491 Status Code: 0x2 00:22:16.491 Status Code Type: 0x0 00:22:16.491 Do Not Retry: 1 00:22:16.491 Error Location: 0x28 00:22:16.491 LBA: 0x0 00:22:16.491 Namespace: 0x0 00:22:16.491 Vendor Log Page: 0x0 00:22:16.491 ----------- 00:22:16.491 Entry: 2 00:22:16.491 Error Count: 0x1 00:22:16.491 Submission Queue Id: 0x0 00:22:16.491 Command Id: 0x4 00:22:16.491 Phase Bit: 0 00:22:16.491 Status Code: 0x2 00:22:16.491 Status Code Type: 0x0 00:22:16.491 Do Not Retry: 1 00:22:16.491 Error Location: 0x28 00:22:16.491 LBA: 0x0 00:22:16.491 Namespace: 0x0 00:22:16.491 Vendor Log Page: 0x0 00:22:16.491 00:22:16.491 Number of Queues 00:22:16.491 ================ 00:22:16.491 Number of I/O Submission Queues: 128 00:22:16.491 Number of I/O Completion Queues: 128 00:22:16.491 00:22:16.491 ZNS Specific Controller Data 00:22:16.491 ============================ 00:22:16.491 Zone Append Size Limit: 0 00:22:16.491 00:22:16.491 00:22:16.491 Active Namespaces 00:22:16.491 ================= 00:22:16.491 get_feature(0x05) failed 00:22:16.491 Namespace ID:1 00:22:16.491 Command Set Identifier: NVM (00h) 00:22:16.491 Deallocate: Supported 00:22:16.491 Deallocated/Unwritten Error: Not Supported 00:22:16.491 Deallocated Read Value: Unknown 00:22:16.491 Deallocate in Write Zeroes: Not Supported 00:22:16.491 Deallocated Guard Field: 0xFFFF 00:22:16.491 Flush: Supported 00:22:16.491 Reservation: Not Supported 00:22:16.491 Namespace Sharing Capabilities: Multiple Controllers 00:22:16.491 Size (in LBAs): 1310720 (5GiB) 00:22:16.491 Capacity (in LBAs): 1310720 (5GiB) 00:22:16.491 Utilization (in LBAs): 1310720 (5GiB) 00:22:16.491 UUID: 6366851a-387f-4414-adb1-ce50254490c2 00:22:16.491 Thin Provisioning: Not Supported 00:22:16.491 Per-NS Atomic Units: Yes 00:22:16.491 Atomic Boundary Size (Normal): 0 00:22:16.491 Atomic Boundary Size (PFail): 0 00:22:16.491 Atomic Boundary Offset: 0 00:22:16.491 NGUID/EUI64 Never Reused: No 00:22:16.491 ANA group ID: 1 00:22:16.491 Namespace Write Protected: No 00:22:16.491 Number of LBA Formats: 1 00:22:16.491 Current LBA Format: LBA Format #00 00:22:16.491 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:22:16.491 00:22:16.491 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:16.491 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.491 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.750 rmmod nvme_tcp 00:22:16.750 rmmod nvme_fabrics 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:16.750 13:51:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.750 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:16.751 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:17.010 13:51:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:17.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:17.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:17.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:18.242 00:22:18.242 real 0m3.978s 00:22:18.242 user 0m1.314s 00:22:18.242 sys 0m2.043s 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 ************************************ 00:22:18.242 END TEST nvmf_identify_kernel_target 00:22:18.242 ************************************ 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:18.242 13:51:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 ************************************ 00:22:18.242 START TEST nvmf_auth_host 00:22:18.242 ************************************ 00:22:18.242 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:18.242 * Looking for test storage... 
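[editor's note] The clean_kernel_target sequence above tears down the in-kernel NVMe-oF target by walking its configfs tree in reverse creation order: disable, unlink the subsystem from the port, remove the namespace, the port, and finally the subsystem directory, then unload the nvmet modules. A minimal standalone sketch of the same teardown, assuming the nqn.2016-06.io.spdk:testnqn subsystem and port 1 used by this test; the redirection target of the 'echo 0' is not shown by xtrace, so the enable path below is inferred:

    # Tear down a kernel nvmet target via configfs, reverse of creation order.
    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet

    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # inferred target of 'echo 0'
    rm -f  "$cfg/ports/1/subsystems/$nqn"                 # unlink subsystem from port
    rmdir  "$cfg/subsystems/$nqn/namespaces/1"            # remove namespace
    rmdir  "$cfg/ports/1"                                 # remove port
    rmdir  "$cfg/subsystems/$nqn"                         # remove subsystem
    modprobe -r nvmet_tcp nvmet                           # unload transport and core

After the modules are gone, the log reruns scripts/setup.sh, which hands the local NVMe devices back to uio_pci_generic, as the device lines above show.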
00:22:18.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.242 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:18.242 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:18.242 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.502 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:18.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.503 --rc genhtml_branch_coverage=1 00:22:18.503 --rc genhtml_function_coverage=1 00:22:18.503 --rc genhtml_legend=1 00:22:18.503 --rc geninfo_all_blocks=1 00:22:18.503 --rc geninfo_unexecuted_blocks=1 00:22:18.503 00:22:18.503 ' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:18.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.503 --rc genhtml_branch_coverage=1 00:22:18.503 --rc genhtml_function_coverage=1 00:22:18.503 --rc genhtml_legend=1 00:22:18.503 --rc geninfo_all_blocks=1 00:22:18.503 --rc geninfo_unexecuted_blocks=1 00:22:18.503 00:22:18.503 ' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:18.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.503 --rc genhtml_branch_coverage=1 00:22:18.503 --rc genhtml_function_coverage=1 00:22:18.503 --rc genhtml_legend=1 00:22:18.503 --rc geninfo_all_blocks=1 00:22:18.503 --rc geninfo_unexecuted_blocks=1 00:22:18.503 00:22:18.503 ' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:18.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.503 --rc genhtml_branch_coverage=1 00:22:18.503 --rc genhtml_function_coverage=1 00:22:18.503 --rc genhtml_legend=1 00:22:18.503 --rc geninfo_all_blocks=1 00:22:18.503 --rc geninfo_unexecuted_blocks=1 00:22:18.503 00:22:18.503 ' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.503 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:18.503 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:18.504 Cannot find device "nvmf_init_br" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:18.504 Cannot find device "nvmf_init_br2" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:18.504 Cannot find device "nvmf_tgt_br" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.504 Cannot find device "nvmf_tgt_br2" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:18.504 Cannot find device "nvmf_init_br" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:18.504 Cannot find device "nvmf_init_br2" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:18.504 Cannot find device "nvmf_tgt_br" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:18.504 Cannot find device "nvmf_tgt_br2" 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:22:18.504 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:18.763 Cannot find device "nvmf_br" 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:18.763 Cannot find device "nvmf_init_if" 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:18.763 Cannot find device "nvmf_init_if2" 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.763 13:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:18.763 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
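[editor's note] The nvmf_veth_init sequence above builds the test network: a nvmf_tgt_ns_spdk namespace for the target, veth pairs whose target ends are moved into that namespace, addresses 10.0.0.1-10.0.0.4/24, and an nvmf_br bridge that the host-side peers are enslaved to. A condensed sketch of the same topology for a single initiator/target pair, keeping the interface names used here:

    # One initiator/target veth pair bridged on the host; target end in a netns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br   # host end of the initiator pair
    ip link set nvmf_tgt_br  master nvmf_br   # host end of the target pair

The "Cannot find device" messages above are expected on the first run: teardown is attempted before setup so that stale interfaces from a previous run never survive into this one.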
00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:19.022 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:19.022 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:19.022 00:22:19.022 --- 10.0.0.3 ping statistics --- 00:22:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.022 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:19.022 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:19.022 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:22:19.022 00:22:19.022 --- 10.0.0.4 ping statistics --- 00:22:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.022 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:22:19.022 00:22:19.022 --- 10.0.0.1 ping statistics --- 00:22:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.022 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:19.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:22:19.022 00:22:19.022 --- 10.0.0.2 ping statistics --- 00:22:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.022 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92850 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92850 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 92850 ']' 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
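[editor's note] Note how the ACCEPT rules for port 4420 above are installed with -m comment --comment 'SPDK_NVMF:...': every rule the test adds carries that tag, so the nvmftestfini path seen earlier can sweep them all out with a single save/filter/restore pass instead of tracking rule numbers. A sketch of that tag-and-sweep pattern, using one of the rules from this run:

    # Tag rules at insert time so cleanup never needs to know their positions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Later: drop every tagged rule in one pass, leaving other rules untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The four pings that follow verify the topology in both directions (host to each target address, and from inside the namespace back to each initiator address) before nvmf_tgt is launched inside the namespace.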
00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:19.022 13:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8d98a12b4947cf343a5bf603b0f09ecb 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KwL 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8d98a12b4947cf343a5bf603b0f09ecb 0 00:22:19.957 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8d98a12b4947cf343a5bf603b0f09ecb 0 00:22:19.958 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:19.958 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:19.958 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8d98a12b4947cf343a5bf603b0f09ecb 00:22:19.958 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:19.958 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KwL 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KwL 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.KwL 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.218 13:52:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62e1390489614429b8c3b7b4311bea92511d95d5133d31e9ffb1def01ce628f0 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.96h 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62e1390489614429b8c3b7b4311bea92511d95d5133d31e9ffb1def01ce628f0 3 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62e1390489614429b8c3b7b4311bea92511d95d5133d31e9ffb1def01ce628f0 3 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62e1390489614429b8c3b7b4311bea92511d95d5133d31e9ffb1def01ce628f0 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:20.218 13:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.96h 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.96h 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.96h 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ecd2de4e003be4b39d3ae45a03bae0fb947e6cebe412f231 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pRp 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ecd2de4e003be4b39d3ae45a03bae0fb947e6cebe412f231 0 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ecd2de4e003be4b39d3ae45a03bae0fb947e6cebe412f231 0 
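[editor's note] gen_dhchap_key above reads len/2 random bytes via 'xxd -p -c0' and then format_dhchap_key wraps the hex in the DHHC-1 secret representation used for NVMe DH-HMAC-CHAP, with the two-digit digest index taken from the digests map (null=0, sha256=1, sha384=2, sha512=3). The in-line python it pipes the key through is hidden by xtrace; the sketch below assumes the standard encoding that nvme-cli's gen-dhchap-key also produces, i.e. base64 of the key bytes followed by a little-endian CRC-32 trailer:

    # Wrap raw hex key material in a DHHC-1 secret (sketch; CRC-32 trailer is
    # an assumption, since the actual python one-liner is not shown by xtrace).
    key=8d98a12b4947cf343a5bf603b0f09ecb   # 32 hex chars from /dev/urandom above
    digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'EOF'
    import base64, binascii, struct, sys
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack('<I', binascii.crc32(key))   # little-endian CRC-32 of the key
    print('DHHC-1:%02d:%s:' % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF

The result is written to a mktemp file such as /tmp/spdk.key-null.KwL and chmod'ed 0600, since the secret must stay private to the test user.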
00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ecd2de4e003be4b39d3ae45a03bae0fb947e6cebe412f231 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pRp 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pRp 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pRp 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91d44d328d624529e5535ea78afbeef2ba05c807eccf464d 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sxu 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91d44d328d624529e5535ea78afbeef2ba05c807eccf464d 2 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91d44d328d624529e5535ea78afbeef2ba05c807eccf464d 2 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91d44d328d624529e5535ea78afbeef2ba05c807eccf464d 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.218 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sxu 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sxu 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sxu 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.478 13:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52ed2b09d029479ab606fa7a4f661a72 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UFN 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52ed2b09d029479ab606fa7a4f661a72 1 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52ed2b09d029479ab606fa7a4f661a72 1 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52ed2b09d029479ab606fa7a4f661a72 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:20.478 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UFN 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UFN 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UFN 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=985aff5d98c5567369fc198d77fec8bf 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xI6 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 985aff5d98c5567369fc198d77fec8bf 1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 985aff5d98c5567369fc198d77fec8bf 1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=985aff5d98c5567369fc198d77fec8bf 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xI6 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xI6 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xI6 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5b5dd89be01988f6127585d214f20773ca1a45ccab194e0d 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.K0r 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5b5dd89be01988f6127585d214f20773ca1a45ccab194e0d 2 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5b5dd89be01988f6127585d214f20773ca1a45ccab194e0d 2 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5b5dd89be01988f6127585d214f20773ca1a45ccab194e0d 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.K0r 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.K0r 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.K0r 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:22:20.479 13:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90f4387d20d985ad32b04e935c50ac8a 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eg5 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90f4387d20d985ad32b04e935c50ac8a 0 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90f4387d20d985ad32b04e935c50ac8a 0 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90f4387d20d985ad32b04e935c50ac8a 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:22:20.479 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eg5 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eg5 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.eg5 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:20.738 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5314cd652fdef8f22f2bb59e8d0b2a416edbe0c13e6af994eebf78f91ebcb0a1 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gqc 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5314cd652fdef8f22f2bb59e8d0b2a416edbe0c13e6af994eebf78f91ebcb0a1 3 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5314cd652fdef8f22f2bb59e8d0b2a416edbe0c13e6af994eebf78f91ebcb0a1 3 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5314cd652fdef8f22f2bb59e8d0b2a416edbe0c13e6af994eebf78f91ebcb0a1 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gqc 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gqc 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gqc 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92850 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 92850 ']' 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:20.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:20.739 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KwL 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.96h ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.96h 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pRp 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sxu ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.sxu 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UFN 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xI6 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xI6 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.K0r 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.eg5 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.eg5 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gqc 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.998 13:52:01 
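[annotation] Once the target process is listening on /var/tmp/spdk.sock, host/auth.sh@80-82 registers every generated key file with SPDK's keyring, pairing key<i> with its controller key ckey<i> when one exists, as traced above. A minimal sketch of that loop:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
      # ckeys[4] is empty, so the controller (bidirectional) key is optional
      [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done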
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:22:20.998 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:20.999 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:20.999 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:20.999 13:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:21.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:21.567 Waiting for block devices as requested 00:22:21.567 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:22.764 No valid GPT data, bailing 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:22.764 No valid GPT data, bailing 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:22:22.764 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:22.765 No valid GPT data, bailing 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:22.765 No valid GPT data, bailing 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:22.765 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:23.024 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.1 -t tcp -s 4420 00:22:23.024 00:22:23.024 Discovery Log Number of Records 2, Generation counter 2 00:22:23.024 =====Discovery Log Entry 0====== 00:22:23.024 trtype: tcp 00:22:23.024 adrfam: ipv4 00:22:23.024 subtype: current discovery subsystem 00:22:23.024 treq: not specified, sq flow control disable supported 00:22:23.024 portid: 1 00:22:23.024 trsvcid: 4420 00:22:23.024 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:23.024 traddr: 10.0.0.1 00:22:23.024 eflags: none 00:22:23.024 sectype: none 00:22:23.024 =====Discovery Log Entry 1====== 00:22:23.024 trtype: tcp 00:22:23.024 adrfam: ipv4 00:22:23.024 subtype: nvme subsystem 00:22:23.024 treq: not specified, sq flow control disable supported 00:22:23.024 portid: 1 00:22:23.024 trsvcid: 4420 00:22:23.024 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:23.024 traddr: 10.0.0.1 00:22:23.024 eflags: none 00:22:23.024 sectype: none 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.025 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.284 nvme0n1 00:22:23.285 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.285 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.285 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 nvme0n1 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 
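[annotation] The block above is the first connect_authenticate pass of the sweep (sha256 / ffdhe2048 / keyid 0): it pins the initiator's DH-HMAC-CHAP options, attaches to the kernel soft target at 10.0.0.1:4420, and checks that the controller came up as nvme0 before detaching. Condensed from the xtrace (host/auth.sh@55-65), with the ckey array expansion inlined:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # restrict the initiator to the one digest/dhgroup combination under test
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # attach using the keyring entries registered earlier; the controller
      # key is passed only when a ckey was generated for this keyid
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # success check: the attached controller must report the expected name
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }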
13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.545 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 nvme0n1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:23.545 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.545 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 nvme0n1 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.805 13:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 nvme0n1 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.805 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.073 
13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
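[annotation] Each connect_authenticate pass is preceded by nvmet_auth_set_key (host/auth.sh@42-51), which pushes the matching parameters to the kernel target side. The xtrace records the echo commands but not their redirect targets; the configfs attribute paths below are an assumption based on the kernel nvmet DH-HMAC-CHAP interface, sketched for the keyid=4 iteration just traced:

  # hypothetical reconstruction -- redirect targets are not in the log
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test
  echo 'ffdhe2048'    > "$host/dhchap_dhgroup"  # DH group under test
  echo "$key"         > "$host/dhchap_key"      # the DHHC-1:03:... secret above
  # keys[4] has no ckey, so nothing is written to dhchap_ctrl_key here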
00:22:24.073 nvme0n1 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.073 13:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:24.361 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.361 nvme0n1 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.361 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.621 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.621 13:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 nvme0n1 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:24.621 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.622 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.622 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.622 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.881 nvme0n1 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:24.881 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:24.882 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:24.882 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.882 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 nvme0n1 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.141 13:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 nvme0n1 00:22:25.141 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.141 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.141 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.141 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.141 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.401 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.660 13:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.660 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.920 nvme0n1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.920 13:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.920 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.180 nvme0n1 00:22:26.180 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.180 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.180 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.180 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.180 13:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.180 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.441 nvme0n1 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.441 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.701 nvme0n1 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:26.701 13:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.701 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.960 nvme0n1 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.960 13:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.341 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.602 nvme0n1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.602 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.865 nvme0n1 00:22:28.865 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.865 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.865 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.865 13:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.865 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.865 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.128 13:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.128 13:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.388 nvme0n1 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:29.388 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.388 
13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.647 nvme0n1 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.647 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.907 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.166 nvme0n1 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.166 13:52:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.166 13:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.735 nvme0n1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.735 13:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 nvme0n1 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.304 
13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.304 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.305 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.873 nvme0n1 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:31.873 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.874 13:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.442 nvme0n1 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.442 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.442 13:52:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.442 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.011 nvme0n1 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.011 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.012 nvme0n1 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.012 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.271 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.272 nvme0n1 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:33.272 
13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.272 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 nvme0n1 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.532 
13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 nvme0n1 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.532 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.792 nvme0n1 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.792 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.052 nvme0n1 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 
13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.052 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.053 13:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.053 nvme0n1 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.053 13:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:34.312 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 nvme0n1 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.312 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.313 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 nvme0n1 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.571 
13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.571 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.572 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
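For anyone skimming the xtrace output above: each pass of this test exercises one (digest, dhgroup, keyid) combination against the kernel nvmet target, attaching and detaching a controller per key. Condensed from the traced commands, the cycle looks roughly like the sketch below; rpc_cmd, nvmet_auth_set_key, get_main_ns_ip, and the keys/ckeys arrays are helpers and fixtures defined earlier in the SPDK test framework, so treat this as an illustration of the loop being traced, not the verbatim host/auth.sh source.

    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do     # 0..4
            # Program the nvmet target side with the DH-HMAC-CHAP key pair.
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"

            # Emit --dhchap-ctrlr-key only when a controller key exists for
            # this keyid; otherwise the array stays empty and the flag is omitted.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

            # Restrict the host to the digest/dhgroup under test, then connect.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # Authentication succeeded iff the controller materialized; clean up.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The escaped-looking comparison [[ nvme0 == \n\v\m\e\0 ]] in the trace is an xtrace artifact: bash re-prints a quoted pattern operand with every character backslash-escaped to show it matches literally, so this is a plain string comparison, not a run of escape sequences.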
00:22:34.831 nvme0n1 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.831 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.831 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.091 nvme0n1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.091 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.091 13:52:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.091 13:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.351 nvme0n1 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.351 nvme0n1 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.351 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:35.610 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.611 nvme0n1 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.611 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.870 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.871 nvme0n1 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.871 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.130 13:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.130 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.131 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.131 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.131 13:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.390 nvme0n1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.390 13:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.390 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.650 nvme0n1 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.650 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.952 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.953 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.953 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.953 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 nvme0n1 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.212 13:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.471 nvme0n1 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.471 13:52:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.471 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:37.472 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:37.472 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:37.472 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:37.472 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.472 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.731 nvme0n1 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.731 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.990 13:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.558 nvme0n1 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.558 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.559 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.129 nvme0n1 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.129 13:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.129 13:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.129 13:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.697 nvme0n1 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:39.697 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:39.698 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.698 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.698 
13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.266 nvme0n1 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.266 13:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 nvme0n1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:40.836 13:52:21 
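Keyid 4 above is the one entry without a controller key (ckey is empty, hence the [[ -z '' ]] check), and the script handles that with the ${var:+word} expansion visible at auth.sh@58: the ckey array expands to nothing when ckeys[keyid] is unset or empty, so no --dhchap-ctrlr-key flag ever reaches the attach RPC. The idiom in isolation, with illustrative values:

    # Reproduction of the optional-flag idiom from the trace: expand to the
    # extra RPC arguments only when this keyid has a controller key.
    ckeys=([3]='DHHC-1:00:example==:' [4]='')
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-no controller key flag}"
    done

Run as-is this prints the flag pair for keyid 3 and nothing extra for keyid 4, matching the unidirectional-auth attach seen above.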
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.836 13:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 nvme0n1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:40.836 13:52:21 
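On the target side, the echo 'hmac(sha512)' / echo ffdhe2048 / echo DHHC-1:... steps inside nvmet_auth_set_key line up with the attributes the Linux nvmet configfs exposes for per-host DH-HMAC-CHAP settings. The log only records the echo side, so the destination paths below are an assumption based on the upstream nvmet layout:

    # Presumed destination of the echoed values (paths are an assumption;
    # the excerpt shows only the values being echoed).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest under test
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # DH group under test
    echo "$key"         > "$host/dhchap_key"        # DHHC-1 host secret
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional only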
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.836 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 nvme0n1 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
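Each connect_authenticate round on the host side boils down to two RPCs: pin the bdev_nvme module to exactly the digest and DH group under test, then attach with the matching key names. Pulled out of the rpc_cmd wrapper and shown standalone (scripts/rpc.py is SPDK's stock RPC client; the flags are the ones traced above):

    # The two RPCs behind one connect_authenticate round (here keyid 2).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

Note that key2/ckey2 are names of keys SPDK already holds, not the literal DHHC-1 secrets; restricting the digest/dhgroup list first is what forces the negotiation onto the one combination being exercised.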
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 nvme0n1 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.097 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.097 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.097 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.097 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.097 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 nvme0n1 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
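The verification and teardown bracketing every attach is equally compact: list controllers, check that the expected name came back, detach. The nvme0n1 strings interleaved in the log are the attach RPC reporting the bdev it created for namespace 1 of the connected controller. As a standalone sketch:

    # Post-attach check and cleanup, per the auth.sh@64-65 trace: confirm
    # the authenticated controller exists, then detach so the next
    # digest/dhgroup combination starts clean.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0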
nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.357 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.358 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.617 nvme0n1 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.617 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
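At this point the dhgroup loop has advanced from ffdhe2048 to ffdhe3072 while the digest stays sha512, and the same five keyids replay; that cadence is just three nested loops in auth.sh (the for lines at @100-@102 in the trace). Roughly, with placeholder arrays since the excerpt only shows part of the sweep:

    # Runnable sketch of the sweep shape; the digest/dhgroup lists here are
    # assumptions, the excerpt itself only shows sha384/sha512 and
    # ffdhe2048/3072/4096/8192.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
    keys=(k0 k1 k2 k3 k4)     # stand-ins for the five DHHC-1 secrets
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                echo "nvmet_auth_set_key   $digest $dhgroup $keyid"   # target side
                echo "connect_authenticate $digest $dhgroup $keyid"  # host side
            done
        done
    done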
common/autotest_common.sh@10 -- # set +x 00:22:41.618 nvme0n1 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.618 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.878 nvme0n1 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:41.878 
13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.878 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.879 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.138 nvme0n1 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.138 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.139 
13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.139 13:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.398 nvme0n1 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.398 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.399 nvme0n1 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.399 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 nvme0n1 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.918 
13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:42.918 13:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.918 nvme0n1 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.918 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:43.178 13:52:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.178 13:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 nvme0n1 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.178 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.438 13:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.438 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 nvme0n1 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.439 
13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.439 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
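[commentary] The stretch of trace above repeats one fixed pattern for every (digest, dhgroup, keyid) combination driven by the loops at host/auth.sh@101-103: program a secret into the kernel nvmet target, then authenticate against it from the SPDK initiator. The target half, nvmet_auth_set_key (host/auth.sh@42-51), is worth distilling once, since xtrace shows its echo commands but not their redirection targets. A minimal sketch follows; the configfs paths and the $nvmet_host variable are assumptions modelled on the Linux nvmet host attributes, while keys[]/ckeys[] are the secret arrays the keyid loop iterates over.

    # Reconstruction of nvmet_auth_set_key from the trace above.
    # ASSUMPTION: the echoes at auth.sh@48-51 are redirected into the
    # kernel nvmet configfs host attributes; "$nvmet_host" is a
    # hypothetical stand-in for /sys/kernel/config/nvmet/hosts/<hostnqn>.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key=${keys[keyid]} ckey=${ckeys[keyid]}

        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"    # e.g. 'hmac(sha512)'
        echo "${dhgroup}" > "${nvmet_host}/dhchap_dhgroup"      # e.g. ffdhe4096
        echo "${key}" > "${nvmet_host}/dhchap_key"
        # keyid 4 has no controller key, hence the "[[ -z '' ]]" branches in the log
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
    }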
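[commentary] The initiator half, connect_authenticate (host/auth.sh@55-65), appears verbatim in the trace as RPC calls; only the shell framing below is reconstructed around them. It pins the bdev_nvme module to the one digest/dhgroup pair under test, attaches a controller with the matching key (plus a controller key when authentication is bidirectional), confirms the controller actually materialized, and tears it down for the next iteration. The helper get_main_ns_ip (nvmf/common.sh@769-783) resolves the target address by indirect expansion; the name TEST_TRANSPORT is an assumption, since xtrace only ever shows its expanded value 'tcp'.

    # Initiator-side sketch; RPC names and arguments are copied from the
    # trace, the function bodies are reconstructed around them.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # ASSUMPTION: TEST_TRANSPORT carries the transport name ("tcp" here)
        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect lookup; NVMF_INITIATOR_IP=10.0.0.1 in this run
        echo "${!ip}"
    }

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the attach only succeeds if DH-HMAC-CHAP completed, so a controller
        # named nvme0 showing up is the pass condition checked at auth.sh@64
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }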
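[commentary] A note on the secrets themselves: each follows the NVMe-oF DH-HMAC-CHAP interchange format DHHC-1:&lt;t&gt;:&lt;base64&gt;:, where, as commonly defined for this representation, &lt;t&gt;=00 denotes an untransformed secret and 01/02/03 denote SHA-256/384/512-transformed ones; the keys in this run span all four variants, and keyid 4 additionally exercises the unidirectional case (empty ckey, so no --dhchap-ctrlr-key is passed).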
00:22:43.699 nvme0n1 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:43.699 13:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.699 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.959 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.218 nvme0n1 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.218 13:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.218 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.219 13:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.219 13:52:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.219 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.478 nvme0n1 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.478 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.479 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.738 nvme0n1 00:22:44.738 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.738 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.738 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.738 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.738 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.997 13:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.257 nvme0n1 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.257 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.516 nvme0n1 00:22:45.516 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.516 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.516 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.516 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.517 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGQ5OGExMmI0OTQ3Y2YzNDNhNWJmNjAzYjBmMDllY2KahyaU: 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: ]] 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlMTM5MDQ4OTYxNDQyOWI4YzNiN2I0MzExYmVhOTI1MTFkOTVkNTEzM2QzMWU5ZmZiMWRlZjAxY2U2MjhmMGWHMF8=: 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.776 13:52:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.776 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.035 nvme0n1 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.035 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.295 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.295 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.295 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.295 13:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.295 13:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.295 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.863 nvme0n1 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.864 13:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.122 nvme0n1 00:22:47.122 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.122 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWI1ZGQ4OWJlMDE5ODhmNjEyNzU4NWQyMTRmMjA3NzNjYTFhNDVjY2FiMTk0ZTBkML5f3g==: 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBmNDM4N2QyMGQ5ODVhZDMyYjA0ZTkzNWM1MGFjOGEpk67y: 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.382 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.950 nvme0n1 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTMxNGNkNjUyZmRlZjhmMjJmMmJiNTllOGQwYjJhNDE2ZWRiZTBjMTNlNmFmOTk0ZWViZjc4ZjkxZWJjYjBhMdrYSis=: 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:47.951 13:52:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.951 13:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 nvme0n1 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 2024/11/06 13:52:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:48.519 request: 00:22:48.519 { 00:22:48.519 "method": "bdev_nvme_attach_controller", 00:22:48.519 "params": { 00:22:48.519 "name": "nvme0", 00:22:48.519 "trtype": "tcp", 00:22:48.519 "traddr": "10.0.0.1", 00:22:48.519 "adrfam": "ipv4", 00:22:48.519 "trsvcid": "4420", 00:22:48.519 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.519 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.519 "prchk_reftag": false, 00:22:48.519 "prchk_guard": false, 00:22:48.519 "hdgst": false, 00:22:48.519 "ddgst": false, 00:22:48.519 "allow_unrecognized_csi": false 00:22:48.519 } 00:22:48.519 } 00:22:48.519 Got JSON-RPC error response 00:22:48.519 GoRPCClient: error on JSON-RPC call 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.519 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.520 2024/11/06 13:52:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:48.520 request: 00:22:48.520 { 00:22:48.520 "method": "bdev_nvme_attach_controller", 00:22:48.520 "params": { 00:22:48.520 "name": "nvme0", 00:22:48.520 "trtype": "tcp", 00:22:48.520 "traddr": "10.0.0.1", 00:22:48.520 "adrfam": "ipv4", 00:22:48.520 "trsvcid": "4420", 00:22:48.520 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.520 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.520 "prchk_reftag": false, 00:22:48.520 "prchk_guard": false, 
00:22:48.520 "hdgst": false, 00:22:48.520 "ddgst": false, 00:22:48.520 "dhchap_key": "key2", 00:22:48.520 "allow_unrecognized_csi": false 00:22:48.520 } 00:22:48.520 } 00:22:48.520 Got JSON-RPC error response 00:22:48.520 GoRPCClient: error on JSON-RPC call 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.520 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.779 2024/11/06 13:52:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:48.779 request: 00:22:48.779 { 00:22:48.779 "method": "bdev_nvme_attach_controller", 00:22:48.779 "params": { 00:22:48.779 "name": "nvme0", 00:22:48.779 "trtype": "tcp", 00:22:48.779 "traddr": "10.0.0.1", 00:22:48.779 "adrfam": "ipv4", 00:22:48.779 "trsvcid": "4420", 00:22:48.779 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:48.779 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:48.779 "prchk_reftag": false, 00:22:48.779 "prchk_guard": false, 00:22:48.779 "hdgst": false, 00:22:48.779 "ddgst": false, 00:22:48.779 "dhchap_key": "key1", 00:22:48.779 "dhchap_ctrlr_key": "ckey2", 00:22:48.779 "allow_unrecognized_csi": false 00:22:48.779 } 00:22:48.779 } 00:22:48.779 Got JSON-RPC error response 00:22:48.779 GoRPCClient: error on JSON-RPC call 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.779 nvme0n1 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.779 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.780 2024/11/06 13:52:29 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:22:48.780 request: 00:22:48.780 { 00:22:48.780 "method": "bdev_nvme_set_keys", 00:22:48.780 "params": { 00:22:48.780 "name": "nvme0", 00:22:48.780 "dhchap_key": "key1", 00:22:48.780 "dhchap_ctrlr_key": "ckey2" 00:22:48.780 } 00:22:48.780 } 00:22:48.780 Got JSON-RPC error response 00:22:48.780 GoRPCClient: error on JSON-RPC call 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.780 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.038 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.038 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:49.038 13:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:49.977 13:52:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNkMmRlNGUwMDNiZTRiMzlkM2FlNDVhMDNiYWUwZmI5NDdlNmNlYmU0MTJmMjMxqnBIcg==: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFkNDRkMzI4ZDYyNDUyOWU1NTM1ZWE3OGFmYmVlZjJiYTA1YzgwN2VjY2Y0NjRkPZI4+w==: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.977 nvme0n1 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTJlZDJiMDlkMDI5NDc5YWI2MDZmYTdhNGY2NjFhNzL2tYbI: 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: ]] 00:22:49.977 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTg1YWZmNWQ5OGM1NTY3MzY5ZmMxOThkNzdmZWM4YmbqEZBX: 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.978 2024/11/06 13:52:30 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:22:49.978 request: 00:22:49.978 { 00:22:49.978 "method": "bdev_nvme_set_keys", 00:22:49.978 "params": { 00:22:49.978 "name": "nvme0", 00:22:49.978 "dhchap_key": "key2", 00:22:49.978 "dhchap_ctrlr_key": "ckey1" 00:22:49.978 } 00:22:49.978 } 00:22:49.978 Got JSON-RPC error response 00:22:49.978 GoRPCClient: error on JSON-RPC call 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.978 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.277 13:52:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:50.277 13:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:51.214 13:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.214 13:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:51.214 13:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.214 13:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.214 13:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.214 rmmod nvme_tcp 00:22:51.214 rmmod nvme_fabrics 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92850 ']' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92850 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 92850 ']' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 92850 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92850 00:22:51.214 killing process with pid 92850 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 
= sudo ']' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92850' 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 92850 00:22:51.214 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 92850 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.473 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.732 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.732 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:22:51.733 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:51.992 13:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:52.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:52.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:52.930 13:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.KwL /tmp/spdk.key-null.pRp /tmp/spdk.key-sha256.UFN /tmp/spdk.key-sha384.K0r /tmp/spdk.key-sha512.gqc /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:52.930 13:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:53.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:53.498 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:53.498 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:53.757 00:22:53.757 real 0m35.438s 00:22:53.757 user 0m32.700s 00:22:53.757 sys 0m5.519s 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.757 ************************************ 00:22:53.757 END TEST nvmf_auth_host 00:22:53.757 ************************************ 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.757 ************************************ 00:22:53.757 START TEST nvmf_digest 00:22:53.757 
************************************ 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:53.757 * Looking for test storage... 00:22:53.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:22:53.757 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.017 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:54.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.018 --rc genhtml_branch_coverage=1 00:22:54.018 --rc genhtml_function_coverage=1 00:22:54.018 --rc genhtml_legend=1 00:22:54.018 --rc geninfo_all_blocks=1 00:22:54.018 --rc geninfo_unexecuted_blocks=1 00:22:54.018 00:22:54.018 ' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:54.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.018 --rc genhtml_branch_coverage=1 00:22:54.018 --rc genhtml_function_coverage=1 00:22:54.018 --rc genhtml_legend=1 00:22:54.018 --rc geninfo_all_blocks=1 00:22:54.018 --rc geninfo_unexecuted_blocks=1 00:22:54.018 00:22:54.018 ' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:54.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.018 --rc genhtml_branch_coverage=1 00:22:54.018 --rc genhtml_function_coverage=1 00:22:54.018 --rc genhtml_legend=1 00:22:54.018 --rc geninfo_all_blocks=1 00:22:54.018 --rc geninfo_unexecuted_blocks=1 00:22:54.018 00:22:54.018 ' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:54.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.018 --rc genhtml_branch_coverage=1 00:22:54.018 --rc genhtml_function_coverage=1 00:22:54.018 --rc genhtml_legend=1 00:22:54.018 --rc geninfo_all_blocks=1 00:22:54.018 --rc geninfo_unexecuted_blocks=1 00:22:54.018 00:22:54.018 ' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.018 13:52:34 
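The lt/cmp_versions trace above reduces to a component-wise numeric compare of version strings. A minimal standalone sketch follows; it is an assumed simplification of scripts/common.sh (missing components treated as 0), not the script's exact code:

    # lt VER1 VER2 -> exit 0 when VER1 < VER2, comparing dot/dash/colon
    # separated components left to right (sketch, not the real cmp_versions).
    lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0    # strictly smaller component: less-than
            (( x > y )) && return 1    # strictly larger component: not less
        done
        return 1                       # all components equal: not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace above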
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.018 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.018 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:54.019 Cannot find device "nvmf_init_br" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:54.019 Cannot find device "nvmf_init_br2" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:54.019 Cannot find device "nvmf_tgt_br" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:22:54.019 Cannot find device "nvmf_tgt_br2" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:54.019 Cannot find device "nvmf_init_br" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:54.019 Cannot find device "nvmf_init_br2" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:54.019 Cannot find device "nvmf_tgt_br" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:54.019 Cannot find device "nvmf_tgt_br2" 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:22:54.019 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:54.279 Cannot find device "nvmf_br" 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:54.279 Cannot find device "nvmf_init_if" 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:54.279 Cannot find device "nvmf_init_if2" 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:22:54.279 13:52:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.279 13:52:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.279 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:54.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:54.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:22:54.539 00:22:54.539 --- 10.0.0.3 ping statistics --- 00:22:54.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.539 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:54.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:54.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:22:54.539 00:22:54.539 --- 10.0.0.4 ping statistics --- 00:22:54.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.539 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:22:54.539 00:22:54.539 --- 10.0.0.1 ping statistics --- 00:22:54.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.539 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:54.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:22:54.539 00:22:54.539 --- 10.0.0.2 ping statistics --- 00:22:54.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.539 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:54.539 ************************************ 00:22:54.539 START TEST nvmf_digest_clean 00:22:54.539 ************************************ 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
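For reference, the fixture that nvmf_veth_init assembled in the trace above reduces to a handful of iproute2 calls. This is a minimal sketch using the interface names and addresses from the log; the second initiator/target pair and the iptables ACCEPT rules are elided:

    # Target namespace plus one veth pair per side, stitched by a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
    ping -c 1 10.0.0.3    # initiator -> target reachability, as checked above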
00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:54.539 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94509 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94509 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94509 ']' 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.540 13:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:54.540 [2024-11-06 13:52:35.382071] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:22:54.540 [2024-11-06 13:52:35.382138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.798 [2024-11-06 13:52:35.533509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.798 [2024-11-06 13:52:35.599987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.798 [2024-11-06 13:52:35.600033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.798 [2024-11-06 13:52:35.600043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.798 [2024-11-06 13:52:35.600051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.798 [2024-11-06 13:52:35.600058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
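The target launched above sits paused at --wait-for-rpc until an RPC client tells it to finish initializing. A condensed sketch of that handshake follows; paths and flags are from the log, while the rpc_get_methods polling loop is an assumed stand-in for the script's waitforlisten helper, and the framework_start_init call (shown explicitly only for the bperf side in this log) is assumed to follow the same pattern here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Start nvmf_tgt inside the test netns, deferring subsystem init.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Block until the RPC socket answers, then release initialization.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.1; done
    "$rpc" framework_start_init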
00:22:54.798 [2024-11-06 13:52:35.600441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.365 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.365 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:55.365 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.365 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.365 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.624 null0 00:22:55.624 [2024-11-06 13:52:36.472926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.624 [2024-11-06 13:52:36.496995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94559 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94559 /var/tmp/bperf.sock 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94559 ']' 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:22:55.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:55.624 13:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:55.883 [2024-11-06 13:52:36.557331] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:22:55.883 [2024-11-06 13:52:36.557740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94559 ] 00:22:55.883 [2024-11-06 13:52:36.709600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.883 [2024-11-06 13:52:36.776855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.820 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.820 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:22:56.820 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:56.821 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:56.821 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:57.082 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.082 13:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.340 nvme0n1 00:22:57.340 13:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:57.340 13:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.340 Running I/O for 2 seconds... 
00:22:59.655 24719.00 IOPS, 96.56 MiB/s [2024-11-06T13:52:40.588Z] 24769.00 IOPS, 96.75 MiB/s 00:22:59.655 Latency(us) 00:22:59.655 [2024-11-06T13:52:40.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.655 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:59.655 nvme0n1 : 2.00 24795.01 96.86 0.00 0.00 5157.32 2250.33 12896.64 00:22:59.655 [2024-11-06T13:52:40.588Z] =================================================================================================================== 00:22:59.655 [2024-11-06T13:52:40.588Z] Total : 24795.01 96.86 0.00 0.00 5157.32 2250.33 12896.64 00:22:59.655 { 00:22:59.655 "results": [ 00:22:59.655 { 00:22:59.655 "job": "nvme0n1", 00:22:59.655 "core_mask": "0x2", 00:22:59.655 "workload": "randread", 00:22:59.655 "status": "finished", 00:22:59.655 "queue_depth": 128, 00:22:59.655 "io_size": 4096, 00:22:59.655 "runtime": 2.00375, 00:22:59.655 "iops": 24795.00935745477, 00:22:59.655 "mibps": 96.8555053025577, 00:22:59.655 "io_failed": 0, 00:22:59.655 "io_timeout": 0, 00:22:59.655 "avg_latency_us": 5157.323906692931, 00:22:59.655 "min_latency_us": 2250.332530120482, 00:22:59.655 "max_latency_us": 12896.642570281125 00:22:59.655 } 00:22:59.655 ], 00:22:59.655 "core_count": 1 00:22:59.655 } 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:59.655 | select(.opcode=="crc32c") 00:22:59.655 | "\(.module_name) \(.executed)"' 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94559 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94559 ']' 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94559 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94559 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
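The acc_module/acc_executed pair consumed above comes straight out of accel_get_stats. A standalone sketch of the same verification, with the socket path and jq filter copied from the log (this run configures no DSA, so crc32c must land on the software module):

    # Ask bdevperf's app instance which accel module executed crc32c and
    # how many times, then assert the digest work really ran in software.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))              # digests were computed
    [[ $acc_module == software ]]       # on the software path, not DSA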
00:22:59.655 killing process with pid 94559 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94559' 00:22:59.655 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.655 00:22:59.655 Latency(us) 00:22:59.655 [2024-11-06T13:52:40.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.655 [2024-11-06T13:52:40.588Z] =================================================================================================================== 00:22:59.655 [2024-11-06T13:52:40.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94559 00:22:59.655 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94559 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94655 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94655 /var/tmp/bperf.sock 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94655 ']' 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.915 13:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:59.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:59.915 Zero copy mechanism will not be used. 00:22:59.915 [2024-11-06 13:52:40.829913] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:22:59.915 [2024-11-06 13:52:40.830005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94655 ] 00:23:00.174 [2024-11-06 13:52:40.969919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.174 [2024-11-06 13:52:41.037821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.109 13:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.109 13:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:23:01.109 13:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:01.109 13:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:01.109 13:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:01.368 13:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.368 13:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.626 nvme0n1 00:23:01.626 13:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:01.626 13:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:01.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:01.626 Zero copy mechanism will not be used. 00:23:01.626 Running I/O for 2 seconds... 
00:23:03.941 9325.00 IOPS, 1165.62 MiB/s [2024-11-06T13:52:44.874Z] 9516.50 IOPS, 1189.56 MiB/s 00:23:03.941 Latency(us) 00:23:03.941 [2024-11-06T13:52:44.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.941 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:03.941 nvme0n1 : 2.00 9513.62 1189.20 0.00 0.00 1679.20 483.62 7264.23 00:23:03.941 [2024-11-06T13:52:44.874Z] =================================================================================================================== 00:23:03.941 [2024-11-06T13:52:44.874Z] Total : 9513.62 1189.20 0.00 0.00 1679.20 483.62 7264.23 00:23:03.941 { 00:23:03.941 "results": [ 00:23:03.941 { 00:23:03.941 "job": "nvme0n1", 00:23:03.941 "core_mask": "0x2", 00:23:03.941 "workload": "randread", 00:23:03.941 "status": "finished", 00:23:03.941 "queue_depth": 16, 00:23:03.941 "io_size": 131072, 00:23:03.941 "runtime": 2.002288, 00:23:03.941 "iops": 9513.616422812303, 00:23:03.941 "mibps": 1189.202052851538, 00:23:03.941 "io_failed": 0, 00:23:03.941 "io_timeout": 0, 00:23:03.941 "avg_latency_us": 1679.2022204414277, 00:23:03.941 "min_latency_us": 483.62409638554215, 00:23:03.941 "max_latency_us": 7264.231325301204 00:23:03.941 } 00:23:03.941 ], 00:23:03.941 "core_count": 1 00:23:03.941 } 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:03.941 | select(.opcode=="crc32c") 00:23:03.941 | "\(.module_name) \(.executed)"' 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94655 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94655 ']' 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94655 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94655 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
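A quick sanity check on the summary row above: the MiB/s column is simply IOPS times IO size, so 9513.62 IOPS x 131072 B = 9513.62 x 0.125 MiB ≈ 1189.20 MiB/s, matching the reported throughput (and likewise 24795.01 x 4096 B ≈ 96.86 MiB/s for the earlier 4096-byte run).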
00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94655' 00:23:03.941 killing process with pid 94655 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94655 00:23:03.941 Received shutdown signal, test time was about 2.000000 seconds 00:23:03.941 00:23:03.941 Latency(us) 00:23:03.941 [2024-11-06T13:52:44.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.941 [2024-11-06T13:52:44.874Z] =================================================================================================================== 00:23:03.941 [2024-11-06T13:52:44.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.941 13:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94655 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94745 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94745 /var/tmp/bperf.sock 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94745 ']' 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:04.201 13:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:04.460 [2024-11-06 13:52:45.143526] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:23:04.460 [2024-11-06 13:52:45.143608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94745 ] 00:23:04.460 [2024-11-06 13:52:45.280436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.460 [2024-11-06 13:52:45.346125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.396 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:05.396 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:23:05.396 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:05.396 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:05.396 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:05.655 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.655 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.915 nvme0n1 00:23:05.915 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:05.915 13:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.915 Running I/O for 2 seconds... 
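The bring-up sequence traced above (framework_start_init over /var/tmp/bperf.sock, an NVMe-oF/TCP attach with data digest enabled via --ddgst, then perform_tests) condenses to a short driver. Every command is taken verbatim from the trace; only the rpc() wrapper is hypothetical:

    import subprocess

    SPDK = "/home/vagrant/spdk_repo/spdk"
    SOCK = "/var/tmp/bperf.sock"

    def rpc(*args):
        subprocess.check_call([f"{SPDK}/scripts/rpc.py", "-s", SOCK, *args])

    rpc("framework_start_init")
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Kick off the timed workload; bdevperf reports the tables seen below.
    subprocess.check_call(
        [f"{SPDK}/examples/bdev/bdevperf/bdevperf.py", "-s", SOCK, "perform_tests"])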
00:23:08.230 29207.00 IOPS, 114.09 MiB/s [2024-11-06T13:52:49.163Z] 29303.00 IOPS, 114.46 MiB/s 00:23:08.230 Latency(us) 00:23:08.230 [2024-11-06T13:52:49.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.230 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:08.230 nvme0n1 : 2.01 29285.16 114.40 0.00 0.00 4366.80 2224.01 14212.63 00:23:08.230 [2024-11-06T13:52:49.163Z] =================================================================================================================== 00:23:08.230 [2024-11-06T13:52:49.163Z] Total : 29285.16 114.40 0.00 0.00 4366.80 2224.01 14212.63 00:23:08.230 { 00:23:08.230 "results": [ 00:23:08.230 { 00:23:08.230 "job": "nvme0n1", 00:23:08.230 "core_mask": "0x2", 00:23:08.230 "workload": "randwrite", 00:23:08.230 "status": "finished", 00:23:08.230 "queue_depth": 128, 00:23:08.230 "io_size": 4096, 00:23:08.230 "runtime": 2.00675, 00:23:08.230 "iops": 29285.16257630497, 00:23:08.230 "mibps": 114.39516631369129, 00:23:08.230 "io_failed": 0, 00:23:08.230 "io_timeout": 0, 00:23:08.230 "avg_latency_us": 4366.799612935817, 00:23:08.230 "min_latency_us": 2224.0128514056223, 00:23:08.230 "max_latency_us": 14212.626506024097 00:23:08.230 } 00:23:08.230 ], 00:23:08.230 "core_count": 1 00:23:08.230 } 00:23:08.230 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:08.230 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:08.230 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:08.230 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:08.230 13:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:08.230 | select(.opcode=="crc32c") 00:23:08.230 | "\(.module_name) \(.executed)"' 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94745 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94745 ']' 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94745 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94745 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
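The MiB/s column in the summary above is just IOPS scaled by the I/O size, mibps = iops * io_size / 2^20, which the JSON fields confirm:

    iops = 29285.16257630497            # "iops" from the results above
    io_size = 4096                      # bytes per I/O (-o 4096)
    print(iops * io_size / (1 << 20))   # ~114.395, the reported 114.40 MiB/s

The same identity holds for the 128 KiB runs, where each I/O counts as 0.125 MiB (e.g. 9513.62 IOPS gives the earlier 1189.20 MiB/s).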
00:23:08.230 killing process with pid 94745 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94745' 00:23:08.230 Received shutdown signal, test time was about 2.000000 seconds 00:23:08.230 00:23:08.230 Latency(us) 00:23:08.230 [2024-11-06T13:52:49.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.230 [2024-11-06T13:52:49.163Z] =================================================================================================================== 00:23:08.230 [2024-11-06T13:52:49.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94745 00:23:08.230 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94745 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94831 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94831 /var/tmp/bperf.sock 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94831 ']' 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.489 13:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:08.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.748 Zero copy mechanism will not be used. 00:23:08.748 [2024-11-06 13:52:49.428525] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:23:08.748 [2024-11-06 13:52:49.428605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94831 ] 00:23:08.748 [2024-11-06 13:52:49.562948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.748 [2024-11-06 13:52:49.631412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.684 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.685 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:23:09.685 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:09.685 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:09.685 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:09.944 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.944 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.202 nvme0n1 00:23:10.202 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:10.202 13:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.202 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:10.202 Zero copy mechanism will not be used. 00:23:10.202 Running I/O for 2 seconds... 
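The repeated notice above is bdevperf observing that this run's 128 KiB I/O size exceeds the 64 KiB zero-copy threshold, so the zero-copy path is skipped. Restated with the values from the log:

    ZCOPY_THRESHOLD = 65536   # bytes, from the notice above
    io_size = 131072          # -o 131072
    if io_size > ZCOPY_THRESHOLD:
        print("Zero copy mechanism will not be used.")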
00:23:12.515 7939.00 IOPS, 992.38 MiB/s [2024-11-06T13:52:53.448Z] 8000.00 IOPS, 1000.00 MiB/s 00:23:12.515 Latency(us) 00:23:12.515 [2024-11-06T13:52:53.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:12.515 nvme0n1 : 2.00 7997.70 999.71 0.00 0.00 1997.09 1283.08 11317.46 00:23:12.515 [2024-11-06T13:52:53.448Z] =================================================================================================================== 00:23:12.515 [2024-11-06T13:52:53.448Z] Total : 7997.70 999.71 0.00 0.00 1997.09 1283.08 11317.46 00:23:12.515 { 00:23:12.515 "results": [ 00:23:12.515 { 00:23:12.515 "job": "nvme0n1", 00:23:12.515 "core_mask": "0x2", 00:23:12.515 "workload": "randwrite", 00:23:12.515 "status": "finished", 00:23:12.515 "queue_depth": 16, 00:23:12.515 "io_size": 131072, 00:23:12.515 "runtime": 2.003077, 00:23:12.515 "iops": 7997.695545403397, 00:23:12.515 "mibps": 999.7119431754246, 00:23:12.515 "io_failed": 0, 00:23:12.515 "io_timeout": 0, 00:23:12.515 "avg_latency_us": 1997.0885902661835, 00:23:12.515 "min_latency_us": 1283.0843373493976, 00:23:12.515 "max_latency_us": 11317.461847389559 00:23:12.515 } 00:23:12.515 ], 00:23:12.515 "core_count": 1 00:23:12.515 } 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:12.516 | select(.opcode=="crc32c") 00:23:12.516 | "\(.module_name) \(.executed)"' 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94831 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94831 ']' 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94831 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94831 00:23:12.516 killing process with pid 94831 00:23:12.516 Received shutdown signal, test time was about 2.000000 seconds 00:23:12.516 00:23:12.516 Latency(us) 00:23:12.516 [2024-11-06T13:52:53.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:12.516 [2024-11-06T13:52:53.449Z] =================================================================================================================== 00:23:12.516 [2024-11-06T13:52:53.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94831' 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94831 00:23:12.516 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94831 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94509 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94509 ']' 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94509 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94509 00:23:12.775 killing process with pid 94509 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94509' 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94509 00:23:12.775 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94509 00:23:13.035 00:23:13.035 real 0m18.633s 00:23:13.035 user 0m34.553s 00:23:13.035 sys 0m5.330s 00:23:13.035 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:13.035 ************************************ 00:23:13.035 END TEST nvmf_digest_clean 00:23:13.035 ************************************ 00:23:13.035 13:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:13.294 ************************************ 00:23:13.294 START TEST nvmf_digest_error 00:23:13.294 ************************************ 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:23:13.294 13:52:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94945 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94945 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94945 ']' 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:13.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.294 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:13.294 [2024-11-06 13:52:54.096105] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:23:13.294 [2024-11-06 13:52:54.096178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.553 [2024-11-06 13:52:54.246978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.553 [2024-11-06 13:52:54.305114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.553 [2024-11-06 13:52:54.305166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.553 [2024-11-06 13:52:54.305176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.553 [2024-11-06 13:52:54.305184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.553 [2024-11-06 13:52:54.305191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
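The target is started with -e 0xFFFF, so all tracepoint groups are enabled, and the app's own notices spell out how to inspect them. A sketch of acting on that suggestion; the spdk_trace binary path is an assumption, while the arguments are quoted verbatim from the notice:

    import subprocess

    # "spdk_trace -s nvmf -i 0" as suggested by app_setup_trace above;
    # build/bin is the usual location, but that path is assumed here.
    subprocess.check_call(
        ["/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace", "-s", "nvmf", "-i", "0"])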
00:23:13.553 [2024-11-06 13:52:54.305589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.121 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:14.121 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:23:14.121 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.121 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.121 13:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:14.121 [2024-11-06 13:52:55.041478] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.121 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:14.381 null0 00:23:14.381 [2024-11-06 13:52:55.190655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.381 [2024-11-06 13:52:55.214763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94995 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94995 /var/tmp/bperf.sock 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94995 ']' 00:23:14.381 13:52:55 
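The setup traced above routes every crc32c operation on the target to the "error" accel module (accel_assign_opc -o crc32c -m error), so digests can be corrupted on demand; the injection RPCs follow just below (accel_error_inject_error with -t disable, then -t corrupt -i 256). Because the host side attaches with --ddgst, each corrupted crc32c shows up on the initiator as the nvme_tcp.c "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR (00/22) records that fill the rest of this run. A sketch of the same configuration, assuming the target's default RPC socket:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def rpc(*args, sock="/var/tmp/spdk.sock"):   # assumed default target socket
        subprocess.check_call([RPC, "-s", sock, *args])

    # Route crc32c to the error-injection module (host/digest.sh@104 above).
    rpc("accel_assign_opc", "-o", "crc32c", "-m", "error")
    # Leave injection off while bperf attaches (digest.sh@63), then corrupt
    # 256 crc32c operations to provoke data digest errors (digest.sh@67).
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")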
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:14.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:14.381 13:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:14.381 [2024-11-06 13:52:55.281059] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:23:14.381 [2024-11-06 13:52:55.281131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94995 ] 00:23:14.640 [2024-11-06 13:52:55.417835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.640 [2024-11-06 13:52:55.494223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.577 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.835 nvme0n1 00:23:15.835 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:15.835 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.835 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:15.835 13:52:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.835 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:15.835 13:52:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.095 Running I/O for 2 seconds... 00:23:16.095 [2024-11-06 13:52:56.832937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.832989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.833003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.843096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.843129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.843157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.853125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.853156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.853166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.863282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.863315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.863326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.873288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.873320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.873331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.882464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.882495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.882522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 
[2024-11-06 13:52:56.892733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.892765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.892791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.903755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.903788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.913289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.913322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.923513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.923546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.095 [2024-11-06 13:52:56.923557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.095 [2024-11-06 13:52:56.933565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.095 [2024-11-06 13:52:56.933599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.944248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.944307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.953648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.953682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.953692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.963966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.963997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.964023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.973703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.973734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.973760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.983372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.983405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.983432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:56.993390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:56.993424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:56.993450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:57.004355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:57.004388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:57.004399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:57.014322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:57.014353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:57.014379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.096 [2024-11-06 13:52:57.024927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.096 [2024-11-06 13:52:57.024960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.096 [2024-11-06 13:52:57.024970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.355 [2024-11-06 13:52:57.035302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.355 [2024-11-06 13:52:57.035334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.355 [2024-11-06 13:52:57.035345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.355 [2024-11-06 13:52:57.045735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.355 [2024-11-06 13:52:57.045768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.355 [2024-11-06 13:52:57.045779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.355 [2024-11-06 13:52:57.056251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.355 [2024-11-06 13:52:57.056283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.355 [2024-11-06 13:52:57.056309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.355 [2024-11-06 13:52:57.065233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.355 [2024-11-06 13:52:57.065264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.355 [2024-11-06 13:52:57.065274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.355 [2024-11-06 13:52:57.077478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.077512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.077538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.087506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.087538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.087549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.097804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.097837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.097848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.106620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.106650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.106659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.117683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.117717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.117728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.129077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.129109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.129136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.139292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.139322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.139348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.149648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.149681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.149692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.159730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.159761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.159788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.170180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.170210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:16.356 [2024-11-06 13:52:57.170236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.180830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.180870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.180897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.190890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.190922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.190933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.199893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.199925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.199951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.209976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.210006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.210017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.219937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.219969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.219995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.230137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.230169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.230179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.240748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.240779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14835 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.240805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.252017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.252060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.260948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.260980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.260991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.271461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.271505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.356 [2024-11-06 13:52:57.282451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.356 [2024-11-06 13:52:57.282484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.356 [2024-11-06 13:52:57.282495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.616 [2024-11-06 13:52:57.292984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.616 [2024-11-06 13:52:57.293027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.616 [2024-11-06 13:52:57.293038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.616 [2024-11-06 13:52:57.303129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.616 [2024-11-06 13:52:57.303160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.616 [2024-11-06 13:52:57.303186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.616 [2024-11-06 13:52:57.313704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:16.616 [2024-11-06 13:52:57.313738] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:16.616 [2024-11-06 13:52:57.313748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:16.616 [2024-11-06 13:52:57.324191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190)
00:23:16.616 [2024-11-06 13:52:57.324223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:16.616 [2024-11-06 13:52:57.324234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further entries from 13:52:57.334039 through 13:52:57.808367 omitted: each repeats the same three-line pattern (data digest error on tqpair=(0xb3e190), a READ with len:1 and varying cid/lba, completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001) ...]
00:23:17.137 24510.00 IOPS, 95.74 MiB/s [2024-11-06T13:52:58.070Z]
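For context on the failures above: NVMe/TCP protects each data PDU payload with a CRC32C data digest (DDGST). Every "data digest error" entry is the host recomputing that CRC32C over a received C2H data PDU and finding it does not match the DDGST carried in the PDU, so the READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and handed to the retry path. Below is a minimal, self-contained C sketch of the checksum itself, using the parameters the spec mandates (reflected Castagnoli polynomial 0x82F63B78, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF). This bitwise version is illustrative only; it is not SPDK's implementation, which here runs through the accel framework, as the nvme_tcp_accel_seq_recv_compute_crc32_done callback name in the log suggests.

/* crc32c_sketch.c - illustrative CRC32C as used for the NVMe/TCP DDGST. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;              /* initial value per spec */

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			/* Reflected Castagnoli polynomial 0x82F63B78. */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;                /* final XOR per spec */
}

int main(void)
{
	/* Standard CRC check string; CRC32C("123456789") == 0xE3069283. */
	const char msg[] = "123456789";
	printf("crc32c = 0x%08X\n", (unsigned)crc32c(msg, strlen(msg)));
	return 0;
}

A receiver-side check then amounts to: computed = crc32c(pdu_payload, payload_len); a mismatch against the PDU's DDGST field is what gets reported here as a data digest error.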
[... further entries from 13:52:57.819031 through 13:52:58.722993 omitted: same three-line data digest error pattern on tqpair=(0xb3e190), each READ completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22); only cid and lba vary ...]
00:23:17.919 [2024-11-06 13:52:58.733883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190)
00:23:17.919 [2024-11-06 13:52:58.733914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22670
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.919 [2024-11-06 13:52:58.733925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.744705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.744738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.755484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.755516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.755527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.765957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.765988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.766014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.775106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.775139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.775150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.785477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.785509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.785536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.795820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.795853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.795873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 [2024-11-06 13:52:58.806036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb3e190) 00:23:17.920 [2024-11-06 13:52:58.806067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.920 [2024-11-06 13:52:58.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.920 24623.00 IOPS, 96.18 MiB/s 00:23:17.920 Latency(us) 00:23:17.920 [2024-11-06T13:52:58.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.920 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:17.920 nvme0n1 : 2.01 24640.45 96.25 0.00 0.00 5189.71 2513.53 16844.59 00:23:17.920 [2024-11-06T13:52:58.853Z] =================================================================================================================== 00:23:17.920 [2024-11-06T13:52:58.853Z] Total : 24640.45 96.25 0.00 0.00 5189.71 2513.53 16844.59 00:23:17.920 { 00:23:17.920 "results": [ 00:23:17.920 { 00:23:17.920 "job": "nvme0n1", 00:23:17.920 "core_mask": "0x2", 00:23:17.920 "workload": "randread", 00:23:17.920 "status": "finished", 00:23:17.920 "queue_depth": 128, 00:23:17.920 "io_size": 4096, 00:23:17.920 "runtime": 2.005726, 00:23:17.920 "iops": 24640.4543791126, 00:23:17.920 "mibps": 96.2517749184086, 00:23:17.920 "io_failed": 0, 00:23:17.920 "io_timeout": 0, 00:23:17.920 "avg_latency_us": 5189.706947737533, 00:23:17.920 "min_latency_us": 2513.5293172690763, 00:23:17.920 "max_latency_us": 16844.59437751004 00:23:17.920 } 00:23:17.920 ], 00:23:17.920 "core_count": 1 00:23:17.920 } 00:23:17.920 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:17.920 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:17.920 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:17.920 13:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:17.920 | .driver_specific 00:23:17.920 | .nvme_error 00:23:17.920 | .status_code 00:23:17.920 | .command_transient_transport_error' 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94995 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94995 ']' 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94995 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94995 00:23:18.179 killing process with pid 94995 00:23:18.179 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.179 00:23:18.179 Latency(us) 00:23:18.179 [2024-11-06T13:52:59.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.179 [2024-11-06T13:52:59.112Z] =================================================================================================================== 00:23:18.179 [2024-11-06T13:52:59.112Z] 
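What the harness just verified, spelled out: get_transient_errcount pulls bdevperf's per-bdev NVMe error counters over the RPC socket and checks that the injected digest corruption actually surfaced as transient transport errors (193 of them here). A minimal standalone sketch of the same check, reusing exactly the rpc.py and jq invocations from the trace above; the socket path and bdev name are this harness's choices, and the errcount variable name is illustrative:

  # Count completions recorded as COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1.
  # Assumes bdevperf is listening on /var/tmp/bperf.sock and bdev_nvme was
  # configured with --nvme-error-stat earlier in the run.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo "expected transient transport errors, found none" >&2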
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94995
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94995 ']'
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94995
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94995
00:23:18.179 killing process with pid 94995
00:23:18.179 Received shutdown signal, test time was about 2.000000 seconds
00:23:18.179
00:23:18.179                                                                                                  Latency(us)
00:23:18.179 [2024-11-06T13:52:59.112Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:18.179 [2024-11-06T13:52:59.112Z] ===================================================================================================================
00:23:18.179 [2024-11-06T13:52:59.112Z] Total                                  :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94995'
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94995
00:23:18.179 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94995
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95085
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95085 /var/tmp/bperf.sock
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95085 ']'
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:18.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:18.747 13:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:18.747 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:18.747 Zero copy mechanism will not be used.
00:23:18.747 [2024-11-06 13:52:59.459934] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:23:18.747 [2024-11-06 13:52:59.460012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95085 ]
00:23:18.747 [2024-11-06 13:52:59.598810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.747 [2024-11-06 13:52:59.662908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:19.680 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:19.680 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:23:19.680 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:19.680 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:19.681 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:19.938 nvme0n1
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:20.199 13:53:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:20.199 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:20.199 Zero copy mechanism will not be used.
00:23:20.199 Running I/O for 2 seconds...
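The RPC sequence traced above is the entire recipe for this error pass: keep per-controller NVMe status-code counters and retry failed I/O indefinitely, leave crc32c injection disabled while the controller attaches, attach over TCP with data digest enabled, then corrupt every 32nd crc32c operation before starting the queued job. Condensed into one place as a sketch (commands as traced; the trace never expands rpc_cmd, so routing the two accel_error_inject_error calls through the same bperf.sock socket is an assumption here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Per-controller NVMe error counters; retry failed I/O forever (-1).
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # No crc32c corruption while connecting (RPC socket for accel calls assumed).
  $rpc -s $sock accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest (--ddgst) so read payloads are checksummed.
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt every 32nd crc32c computation; reads fail digest verification.
  $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the queued bdevperf job (randread, 128 KiB I/O, queue depth 16, 2 s).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

Because --bdev-retry-count is -1, each failed read is retried rather than surfaced as a job failure, which is why the records that follow keep arriving for the full two-second window instead of aborting the run.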
00:23:20.199 [2024-11-06 13:53:00.998118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.199 [2024-11-06 13:53:00.998185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.199 [2024-11-06 13:53:00.998199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:20.199 [2024-11-06 13:53:01.002395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.199 [2024-11-06 13:53:01.002438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.199 [2024-11-06 13:53:01.002467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:20.199 [2024-11-06 13:53:01.004975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.199 [2024-11-06 13:53:01.005012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.199 [2024-11-06 13:53:01.005023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern repeats every few milliseconds from 13:53:01.008 through 13:53:01.296 (data digest error on tqpair 0x1bfde20, READ len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061); intermediate records trimmed ...]
00:23:20.462 [2024-11-06 13:53:01.296674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.462 [2024-11-06 13:53:01.296707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.462 [2024-11-06 13:53:01.296717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:20.462 [2024-11-06 13:53:01.299926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.462 [2024-11-06 13:53:01.299959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.462 [2024-11-06 13:53:01.299970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:20.462 [2024-11-06 13:53:01.303522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:20.462 [2024-11-06 13:53:01.303557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.462 [2024-11-06 13:53:01.303568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.462 [2024-11-06 13:53:01.307024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.462 [2024-11-06 13:53:01.307059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.462 [2024-11-06 13:53:01.307070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.310565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.310600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.310626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.314173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.314208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.314219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.317787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.317822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.317848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.321400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.321436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.321446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.324984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.325015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.325026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.328612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.328647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.328657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.332045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.332080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.332090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.335393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.335427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.335438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.338887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.338918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.338944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.342426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.342460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.342486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.345938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.345970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.345981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.349455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.349488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.349513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.353029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.353063] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.353074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.356485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.356530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.356541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.360088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.360123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.360134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.363368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.363402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.363412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.366007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.366041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.366052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.368893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.368927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.368938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.371971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.372006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.372017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.374379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.374413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.374424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.377349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.377384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.377411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.380070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.380106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.380117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.383078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.383113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.383124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.386038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.386072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.386083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.389238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.389276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-11-06 13:53:01.391569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.463 [2024-11-06 13:53:01.391605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-11-06 13:53:01.391616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.394708] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.394744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.394755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.397976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.398012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.398023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.400452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.400488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.400498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.403377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.403411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.403422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.406816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.406852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.406876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.410561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.410598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.410609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.413229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.413261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.413272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:20.730 [2024-11-06 13:53:01.416352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.416386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.416413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.419799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.419835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.419846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.422432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.422466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.422477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.425635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.425671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.425683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.428660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.428694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.730 [2024-11-06 13:53:01.428719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.730 [2024-11-06 13:53:01.431142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.730 [2024-11-06 13:53:01.431176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.431186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.434327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.434363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.434374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.437671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.437706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.437732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.440050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.440084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.443329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.443364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.443374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.447047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.447079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.447105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.450439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.450473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.453986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.454018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.454028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.457707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.457741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.457752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.461259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.461294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.461305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.464763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.464795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.464820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.468199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.468234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.468245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.471615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.471651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.475154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.475186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.475204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.478801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.478835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.478845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.482411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.482444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 
[2024-11-06 13:53:01.482470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.486002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.486033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.486044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.489488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.489521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.489531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.493036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.493085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.493096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.496666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.496701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.496711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.500337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.731 [2024-11-06 13:53:01.500372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.731 [2024-11-06 13:53:01.500382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.731 [2024-11-06 13:53:01.503750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.503785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.503795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.507108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.507140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.507150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.510730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.510765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.510776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.514414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.514449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.514460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.517116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.517147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.520293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.520339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.520350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.523735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.523772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.523784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.527158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.527199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.527211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.530750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.530785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.530795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.534282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.534317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.534328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.537538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.537573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.537599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.541168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.541203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.541229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.544257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.544290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.544301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.546531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.546566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.546576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.549752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.549787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.549813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.553466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.553501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.553511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.557041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.557075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.557087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.560672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.560705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.560732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.564217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.564252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.564263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.567605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.567640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.567651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.571028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.571060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.732 [2024-11-06 13:53:01.571085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.732 [2024-11-06 13:53:01.574629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.732 [2024-11-06 13:53:01.574664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.574690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.578227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 
00:23:20.733 [2024-11-06 13:53:01.578262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.578272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.581564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.581598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.581624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.585295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.585329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.585356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.588951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.588984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.589010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.592510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.592545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.592557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.596144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.596178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.596189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.599602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.599638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.603144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.603177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.603187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.606679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.606713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.606724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.610153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.610186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.610212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.613687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.613722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.613732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.617282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.617314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.617340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.620468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.620501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.620527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.623017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.623048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.623074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.626280] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.626314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.626325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.629486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.629520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.629546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.631754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.631789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.631800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.635375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.635410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.635421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.639101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.639134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.639160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.642856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.733 [2024-11-06 13:53:01.642913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.733 [2024-11-06 13:53:01.642924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.733 [2024-11-06 13:53:01.645527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.734 [2024-11-06 13:53:01.645560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.734 [2024-11-06 13:53:01.645570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:20.734 [2024-11-06 13:53:01.648751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.734 [2024-11-06 13:53:01.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.734 [2024-11-06 13:53:01.648797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.734 [2024-11-06 13:53:01.652434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.734 [2024-11-06 13:53:01.652469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.734 [2024-11-06 13:53:01.652479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.734 [2024-11-06 13:53:01.655925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:20.734 [2024-11-06 13:53:01.655960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.734 [2024-11-06 13:53:01.655971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.659501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.659537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.659547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.663141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.663178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.663188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.666497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.666533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.666544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.670097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.670133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.670143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.673492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.673527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.673538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.676862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.676921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.676932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.680136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.680172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.680184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.683698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.683734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.683745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.687333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.687369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.687380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.690904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.690935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.690960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.694561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.694594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.694620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.698137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.698170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.698197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.701702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.701749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.705350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.705385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.705397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.708942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.708976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.708986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.712421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.712456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.712467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.715998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.716032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.716043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.719488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.719524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.026 [2024-11-06 13:53:01.719536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.723024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.026 [2024-11-06 13:53:01.723057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.026 [2024-11-06 13:53:01.723082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.026 [2024-11-06 13:53:01.726304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.726337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.726363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.728641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.728673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.728699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.731945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.731979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.731989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.735170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.735210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.737635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.737672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.737683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.741405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.741440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.741451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.745245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.745291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.748771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.748805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.748831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.750944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.750970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.750980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.754489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.754523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.754548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.757824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.757872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.757883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.761410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.761444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.761455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.764963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.764996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.765022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.768604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.768637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.768663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.772059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.772094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.772104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.775467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.775502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.775513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.778971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.779002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.779011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.782483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.782518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.782529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.785851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.785897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.785908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.789329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 
00:23:21.027 [2024-11-06 13:53:01.789364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.789374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.792771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.792805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.792816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.796493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.796530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.796540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.800041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.800077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.800088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.803288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.803324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.803335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.805734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.805769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.805780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.808783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.808817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.808843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.811656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1bfde20) 00:23:21.027 [2024-11-06 13:53:01.811691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.027 [2024-11-06 13:53:01.811717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.027 [2024-11-06 13:53:01.814101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.814134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.814145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.817196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.817229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.817255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.819733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.819768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.819778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.823085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.823121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.823131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.825484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.825517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.825528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.828448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.828483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.828509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.831839] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.831884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.831896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.834043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.834076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.834086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.837035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.837068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.837094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.840355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.840390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.840400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.842668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.842704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.846193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.846228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.846239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.849794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.849827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.849853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:21.028 [2024-11-06 13:53:01.852391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.852436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.852446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.855610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.855645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.855657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.859175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.859214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.859240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.862727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.862761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.862772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.866279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.866313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.866324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.869863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.869922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.869933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.873454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.873489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.873499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.876911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.876945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.876955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.880446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.880482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.880493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.883945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.883980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.883991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.887314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.887349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.887360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.890364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.890396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.890407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.893716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.893751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.893761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.897219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.897253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.028 [2024-11-06 13:53:01.897278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.028 [2024-11-06 13:53:01.900816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.028 [2024-11-06 13:53:01.900852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.900889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.904385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.904421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.904432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.907947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.907987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.907997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.911406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.911453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.914875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.914908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.914918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.918343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.918380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.918390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.921716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.921750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 
[2024-11-06 13:53:01.921760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.925316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.925348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.925374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.928842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.928899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.928910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.931470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.931514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.934387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.934422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.934432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.937937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.937971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.937981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.940249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.940293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.940304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.943509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.943545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.943555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.947154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.947196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.947206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.950668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.950702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.950713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.029 [2024-11-06 13:53:01.954312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.029 [2024-11-06 13:53:01.954348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.029 [2024-11-06 13:53:01.954358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.290 [2024-11-06 13:53:01.957854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.290 [2024-11-06 13:53:01.957901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.290 [2024-11-06 13:53:01.957912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.961429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.961463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.961489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.964997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.965033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.965044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.968496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.968531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.968541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.972088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.972124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.972134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.975628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.975664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.975674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.979108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.979141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.979151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.982584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.982619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.982629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.986069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.986102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.986112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.291 9373.00 IOPS, 1171.62 MiB/s [2024-11-06T13:53:02.224Z] [2024-11-06 13:53:01.990970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.991014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.991025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.994292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.994337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:01.997789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:01.997824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:01.997835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.001325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:02.001359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:02.001384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.004885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:02.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:02.004930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.007937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:02.007972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:02.007982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.011155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:02.011198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:02.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.013299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.291 [2024-11-06 13:53:02.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.291 [2024-11-06 13:53:02.013345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.291 [2024-11-06 13:53:02.017126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:21.291 [2024-11-06 13:53:02.017159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.291 [2024-11-06 13:53:02.017185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:21.291 [2024-11-06 13:53:02.020752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:21.291 [2024-11-06 13:53:02.020785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.291 [2024-11-06 13:53:02.020795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1365 data digest error on tqpair=(0x1bfde20), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining injected READ commands from 13:53:02.024 through 13:53:02.459, with only lba, cid, and sqhd values varying ...]
00:23:21.557 [2024-11-06 13:53:02.459673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:21.558 [2024-11-06 13:53:02.459707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.558 [2024-11-06 13:53:02.459718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.462777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.462810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.462836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.465943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.465976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.465987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.468660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.468694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.468704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.471475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.471510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.471521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.474271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.474304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.474330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.477435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.477469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.477496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.480435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.480471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.480482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.558 [2024-11-06 13:53:02.482925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.558 [2024-11-06 13:53:02.482958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.558 [2024-11-06 13:53:02.482968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.486404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.486439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.486450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.489504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.489538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.489548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.491811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.491847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.491869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.495304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.495340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.495350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.498915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.498947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.498973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.502329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.502362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.820 [2024-11-06 13:53:02.502388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.505757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.505793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.505803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.509381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.509418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.509429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.512877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.512911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.512921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.516247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.516282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.516293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.519911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.519944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.519955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.523105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.523139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.523149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.526549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.526585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.526596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.529775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.529810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.529821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.532042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.532078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.532089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.535730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.535767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.535778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.539533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.539571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.539583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.543324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.543360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.543371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.546028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.546069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.546085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.549201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.549236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.549247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.553011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.553049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.553061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.555588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.555625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.555636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.558654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.558692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.558703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.562032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.562068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.562080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.564726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.564761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.564772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.567715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.820 [2024-11-06 13:53:02.567751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.820 [2024-11-06 13:53:02.567762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.820 [2024-11-06 13:53:02.570828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 
00:23:21.821 [2024-11-06 13:53:02.570879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.570891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.573513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.573550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.573560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.576680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.576714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.576725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.579343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.579376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.579387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.582565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.582602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.582613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.586369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.586406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.586417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.590157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.590194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.590205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.592823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.592868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.592880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.596027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.596062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.596073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.599714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.599751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.599762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.602391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.602423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.605627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.605663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.605674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.609283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.609318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.609329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.612893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.612924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.612935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.614960] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.614990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.615000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.618741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.618776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.618787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.622168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.622202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.622213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.624316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.624349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.624359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.627602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.627639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.627649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.630812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.630847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.630870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.633606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.633640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.633652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:21.821 [2024-11-06 13:53:02.636018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.636054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.636064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.639119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.639156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.639167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.642364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.642401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.642412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.646050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.646086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.646097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.648625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.648658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.648669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.651984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.821 [2024-11-06 13:53:02.652020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.821 [2024-11-06 13:53:02.652031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.821 [2024-11-06 13:53:02.655596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.655632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.655643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.659251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.659286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.659297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.662629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.662664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.662675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.665138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.665170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.665196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.668348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.668383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.668395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.672001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.672036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.672047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.675575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.675611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.675622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.678993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.679038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.679064] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.682404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.682439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.682450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.685560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.685593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.685604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.688142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.688179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.688190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.691497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.691531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.691542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.695054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.695089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.695100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.698642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.698675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.698686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.702151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.702185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 
13:53:02.702211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.705437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.705471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.705482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.708471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.708505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.708516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.710854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.710899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.710910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.714615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.714649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.714675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.718286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.718321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.718332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.721704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.721738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.721749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.724010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.724044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.724055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.727813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.727849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.727871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.731629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.731666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.731677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.735161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.735204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.735215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.737449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.737483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.737509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.741114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.741151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.822 [2024-11-06 13:53:02.741163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.822 [2024-11-06 13:53:02.744883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.822 [2024-11-06 13:53:02.744915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.823 [2024-11-06 13:53:02.744942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.823 [2024-11-06 13:53:02.747575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20) 00:23:21.823 [2024-11-06 13:53:02.747608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.823 [2024-11-06 13:53:02.747620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:22.084 [2024-11-06 13:53:02.750790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bfde20)
00:23:22.084 [2024-11-06 13:53:02.750824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.084 [2024-11-06 13:53:02.750835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error -> READ command -> COMMAND TRANSIENT TRANSPORT ERROR) repeats through 13:53:02.984 for dozens more READ commands on qid:1, varying only cid, lba, and sqhd ...]
00:23:22.087 9569.00 IOPS, 1196.12 MiB/s
00:23:22.087 Latency(us)
00:23:22.087 [2024-11-06T13:53:03.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.087 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:22.087 nvme0n1 : 2.00 9567.65 1195.96 0.00 0.00 1669.47 486.91 7053.67
00:23:22.087 [2024-11-06T13:53:03.020Z] ===================================================================================================================
00:23:22.087 [2024-11-06T13:53:03.020Z] Total : 9567.65 1195.96 0.00 0.00 1669.47 486.91 7053.67
00:23:22.087 {
00:23:22.087   "results": [
00:23:22.087     {
00:23:22.087       "job": "nvme0n1",
00:23:22.087       "core_mask": "0x2",
00:23:22.087       "workload": "randread",
00:23:22.087       "status": "finished",
00:23:22.087       "queue_depth": 16,
00:23:22.087       "io_size": 131072,
00:23:22.087       "runtime": 2.001955,
00:23:22.087       "iops": 9567.647624447103,
00:23:22.087       "mibps": 1195.955953055888,
00:23:22.087       "io_failed": 0,
00:23:22.087       "io_timeout": 0,
00:23:22.087       "avg_latency_us": 1669.4743525841907,
00:23:22.087       "min_latency_us": 486.9140562248996,
00:23:22.087       "max_latency_us": 7053.673895582329
00:23:22.087     }
00:23:22.087   ],
00:23:22.087   "core_count": 1
00:23:22.087 }
00:23:22.087 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 617 > 0 ))
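For reference, the transient-error check traced above reduces to one RPC call piped through jq; a minimal bash sketch (socket path, script location, and bdev name as used in this run; the helper name mirrors the digest.sh function):

    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for a bdev
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    # The test asserts the count is non-zero:
    (( $(get_transient_errcount nvme0n1) > 0 ))

The 617 in the trace is the count returned for nvme0n1: every injected digest failure was surfaced as a transient transport error while io_failed stayed 0.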
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95085
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95085 ']'
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95085
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:22.347 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95085
00:23:22.606 killing process with pid 95085
00:23:22.606 Received shutdown signal, test time was about 2.000000 seconds
00:23:22.606
00:23:22.606 Latency(us)
00:23:22.606 [2024-11-06T13:53:03.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.606 [2024-11-06T13:53:03.539Z] ===================================================================================================================
00:23:22.606 [2024-11-06T13:53:03.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:22.606 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:22.606 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:22.606 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95085'
00:23:22.606 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95085
00:23:22.606 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95085
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95175
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
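The relaunch above follows the pattern used throughout this suite: start bdevperf idle and poll its RPC socket before configuring it. A rough bash equivalent of the traced commands (flags exactly as logged; the polling loop is a simplified stand-in for the waitforlisten helper and assumes rpc_get_methods as the probe):

    # Start bdevperf on core 1, held idle by -z until perform_tests is issued
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Simplified waitforlisten: retry an RPC until the socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done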
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95175 /var/tmp/bperf.sock
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95175 ']'
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:22.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:22.866 13:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:23.125 [2024-11-06 13:53:03.629166] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:23:23.125 [2024-11-06 13:53:03.629242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95175 ]
00:23:23.125 [2024-11-06 13:53:03.782702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:23.691 [2024-11-06 13:53:03.843420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:23.691 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:23.691 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:23:23.691 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:23.691 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:23.950 13:53:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:24.209 nvme0n1
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
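Once the app is listening, the trace above wires up the injection scenario: per-NVMe error statistics are enabled, any stale crc32c injection is cleared, the controller is attached with data digest (--ddgst) turned on, and the accel layer is told to corrupt crc32c results so subsequent data digest checks fail. Condensed from the traced commands into a bash sketch (socket, address, and NQN as in this run):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable     # clear any previous injection
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results (args as traced)
    # kick off the timed randwrite run
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --bdev-retry-count -1 keeps the bdev layer retrying the transiently failed I/Os rather than failing them up the stack, which is presumably why io_failed stays 0 while the transient error counter climbs.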
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:24.209 13:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:24.469 Running I/O for 2 seconds...
00:23:24.469 [2024-11-06 13:53:05.157195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eee5c8
00:23:24.469 [2024-11-06 13:53:05.157920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:24.469 [2024-11-06 13:53:05.157956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:24.469 [2024-11-06 13:53:05.165888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee23b8
00:23:24.469 [2024-11-06 13:53:05.166458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:24.469 [2024-11-06 13:53:05.166492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
pdu=0x200016ef35f0 00:23:24.469 [2024-11-06 13:53:05.211834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.211875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.219631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee8d30 00:23:24.469 [2024-11-06 13:53:05.220129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.220161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.229714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef35f0 00:23:24.469 [2024-11-06 13:53:05.230795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.230827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.238128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef0ff8 00:23:24.469 [2024-11-06 13:53:05.239085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.239119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.247012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efd208 00:23:24.469 [2024-11-06 13:53:05.247711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.247738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.255386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf550 00:23:24.469 [2024-11-06 13:53:05.256006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.256037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.263702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eef270 00:23:24.469 [2024-11-06 13:53:05.264168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.264200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.273800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb1ced0) with pdu=0x200016ee7c50 00:23:24.469 [2024-11-06 13:53:05.274918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.274950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.282226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf118 00:23:24.469 [2024-11-06 13:53:05.283219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.283251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.290668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef20d8 00:23:24.469 [2024-11-06 13:53:05.291535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.469 [2024-11-06 13:53:05.291565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:24.469 [2024-11-06 13:53:05.299007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeff18 00:23:24.469 [2024-11-06 13:53:05.299760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.299792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.307422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efdeb0 00:23:24.470 [2024-11-06 13:53:05.308026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.308057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.318541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee3d08 00:23:24.470 [2024-11-06 13:53:05.319916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.319944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.326923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eea248 00:23:24.470 [2024-11-06 13:53:05.328180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.328211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.335301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb1ced0) with pdu=0x200016ef7970 00:23:24.470 [2024-11-06 13:53:05.336415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.336445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.343664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef7da8 00:23:24.470 [2024-11-06 13:53:05.344684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.344715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.354022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ede470 00:23:24.470 [2024-11-06 13:53:05.355507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.355536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.360264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee99d8 00:23:24.470 [2024-11-06 13:53:05.360926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.360956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.371303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeb328 00:23:24.470 [2024-11-06 13:53:05.372671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.372701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.379677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef0788 00:23:24.470 [2024-11-06 13:53:05.380912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.380942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.388100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef2510 00:23:24.470 [2024-11-06 13:53:05.389222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.389253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.470 [2024-11-06 13:53:05.396479] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef4298 00:23:24.470 [2024-11-06 13:53:05.397486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.470 [2024-11-06 13:53:05.397517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:24.729 [2024-11-06 13:53:05.405083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee12d8 00:23:24.729 [2024-11-06 13:53:05.405973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.729 [2024-11-06 13:53:05.406003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:24.729 [2024-11-06 13:53:05.413447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee12d8 00:23:24.729 [2024-11-06 13:53:05.414220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.729 [2024-11-06 13:53:05.414251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:24.729 [2024-11-06 13:53:05.421847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eea680 00:23:24.729 [2024-11-06 13:53:05.422542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.729 [2024-11-06 13:53:05.422573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:24.729 [2024-11-06 13:53:05.432260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef8a50 00:23:24.729 [2024-11-06 13:53:05.433104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.433135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.440689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef81e0 00:23:24.730 [2024-11-06 13:53:05.441364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.441396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.449091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee6b70 00:23:24.730 [2024-11-06 13:53:05.449676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.449711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.457462] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eef6a8 00:23:24.730 [2024-11-06 13:53:05.457904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.457937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.467342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0a68 00:23:24.730 [2024-11-06 13:53:05.468383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.475651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efcdd0 00:23:24.730 [2024-11-06 13:53:05.476580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.476610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.483686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efe2e8 00:23:24.730 [2024-11-06 13:53:05.484431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.484463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.492235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef7970 00:23:24.730 [2024-11-06 13:53:05.493073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.493104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.502997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eed4e8 00:23:24.730 [2024-11-06 13:53:05.504298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.504330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.509336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee99d8 00:23:24.730 [2024-11-06 13:53:05.509919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.509948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 
13:53:05.520065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeff18 00:23:24.730 [2024-11-06 13:53:05.521143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.521173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.528373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee01f8 00:23:24.730 [2024-11-06 13:53:05.529212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.529245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.537103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef6458 00:23:24.730 [2024-11-06 13:53:05.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.537995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.547776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee4de8 00:23:24.730 [2024-11-06 13:53:05.549155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.549184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.554126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efd640 00:23:24.730 [2024-11-06 13:53:05.554780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.554810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.564158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee12d8 00:23:24.730 [2024-11-06 13:53:05.565087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.565118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.572947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0a68 00:23:24.730 [2024-11-06 13:53:05.573763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.573795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
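The records above repeat one pattern: tcp.c:2233:data_crc32_calc_done() reports a CRC32C data digest mismatch on a received PDU, and the host side then prints the affected WRITE command plus its completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable transport-level status, so each injected digest error is being detected and surfaced exactly as the test intends. As a rough illustration of the kind of check being exercised (a minimal sketch only, not SPDK's implementation; the helper names and the seed/XOR-out convention of 0xFFFFFFFF are assumptions based on common CRC32C digest usage):

/*
 * Illustrative NVMe/TCP-style data digest check: CRC32C over a PDU's
 * DATA field. Not SPDK's code; names and the digest convention here
 * (seed 0xFFFFFFFF, final XOR 0xFFFFFFFF) are assumptions.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c_sw(const void *buf, size_t len, uint32_t crc)
{
    const uint8_t *p = buf;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc;
}

/* Hypothetical helper wrapping the assumed seed/XOR-out convention. */
static uint32_t pdu_data_digest(const void *data, size_t len)
{
    return crc32c_sw(data, len, 0xFFFFFFFFu) ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[4096];            /* matches the len:0x1000 transfers above */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = pdu_data_digest(payload, sizeof(payload));

    payload[100] ^= 0x01;             /* simulate corruption on the wire */

    if (pdu_data_digest(payload, sizeof(payload)) != ddgst)
        fprintf(stderr, "data digest error (expected 0x%08x)\n", ddgst);
    return 0;
}

For scale, the periodic throughput counter that appears further down, 28669.00 IOPS, 111.99 MiB/s, is consistent with these 4 KiB (len:0x1000) writes: 28669 × 4096 bytes/s ≈ 111.99 MiB/s.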
00:23:24.730 [2024-11-06 13:53:05.581371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eddc00 00:23:24.730 [2024-11-06 13:53:05.582024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.582055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.590309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef4f40 00:23:24.730 [2024-11-06 13:53:05.591099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.591130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.598641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eff3c8 00:23:24.730 [2024-11-06 13:53:05.599312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.599342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.607098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efb8b8 00:23:24.730 [2024-11-06 13:53:05.607657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.607688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.618284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeff18 00:23:24.730 [2024-11-06 13:53:05.619567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.619598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.626939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef2510 00:23:24.730 [2024-11-06 13:53:05.628224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.628256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.633240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efb048 00:23:24.730 [2024-11-06 13:53:05.633803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.633833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.643759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eebb98 00:23:24.730 [2024-11-06 13:53:05.644809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.644839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.651980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef6cc8 00:23:24.730 [2024-11-06 13:53:05.652756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.652788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:24.730 [2024-11-06 13:53:05.660642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee95a0 00:23:24.730 [2024-11-06 13:53:05.661350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.730 [2024-11-06 13:53:05.661381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.669107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef6020 00:23:24.990 [2024-11-06 13:53:05.669698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.669728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.680167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef9b30 00:23:24.990 [2024-11-06 13:53:05.681477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.681509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.688646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee9e10 00:23:24.990 [2024-11-06 13:53:05.689870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.689907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.697042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee38d0 00:23:24.990 [2024-11-06 13:53:05.698137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.698168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.705395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef6020 00:23:24.990 [2024-11-06 13:53:05.706392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.715588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efa3a0 00:23:24.990 [2024-11-06 13:53:05.717049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.717079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.721852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee7c50 00:23:24.990 [2024-11-06 13:53:05.722606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.722636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.732330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0ea0 00:23:24.990 [2024-11-06 13:53:05.733583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.740697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee2c28 00:23:24.990 [2024-11-06 13:53:05.741649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.990 [2024-11-06 13:53:05.741682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.990 [2024-11-06 13:53:05.749409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee95a0 00:23:24.991 [2024-11-06 13:53:05.750427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.750457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.757772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eea680 00:23:24.991 [2024-11-06 13:53:05.758512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.758544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.766518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee01f8 00:23:24.991 [2024-11-06 13:53:05.767311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.767340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.777243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf118 00:23:24.991 [2024-11-06 13:53:05.778538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.778569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.783591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee2c28 00:23:24.991 [2024-11-06 13:53:05.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.784188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.794284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efcdd0 00:23:24.991 [2024-11-06 13:53:05.795356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.795385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.802515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef9b30 00:23:24.991 [2024-11-06 13:53:05.803355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.811218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf550 00:23:24.991 [2024-11-06 13:53:05.812082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.812113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.821958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efdeb0 00:23:24.991 [2024-11-06 13:53:05.823316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.828294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee49b0 00:23:24.991 [2024-11-06 13:53:05.828921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.828949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.838899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf988 00:23:24.991 [2024-11-06 13:53:05.840041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.840070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.847147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeb328 00:23:24.991 [2024-11-06 13:53:05.848049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.848082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.855871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf988 00:23:24.991 [2024-11-06 13:53:05.856773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.856803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.866715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee49b0 00:23:24.991 [2024-11-06 13:53:05.868154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.868184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.873099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efdeb0 00:23:24.991 [2024-11-06 13:53:05.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.873805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.883782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf550 00:23:24.991 [2024-11-06 13:53:05.884976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.885006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.892081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee7c50 00:23:24.991 [2024-11-06 13:53:05.892995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.893026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.900815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efcdd0 00:23:24.991 [2024-11-06 13:53:05.901795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.901827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.911440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee2c28 00:23:24.991 [2024-11-06 13:53:05.912909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.912946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.991 [2024-11-06 13:53:05.917737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf118 00:23:24.991 [2024-11-06 13:53:05.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.991 [2024-11-06 13:53:05.918530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.928469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eea680 00:23:25.251 [2024-11-06 13:53:05.929714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.929744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.936670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee01f8 00:23:25.251 [2024-11-06 13:53:05.937675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.937708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.945418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee95a0 00:23:25.251 [2024-11-06 13:53:05.946459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.946489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.953631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee2c28 00:23:25.251 [2024-11-06 13:53:05.954411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.954443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.962254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0ea0 00:23:25.251 [2024-11-06 13:53:05.963080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.963111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.972754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef9b30 00:23:25.251 [2024-11-06 13:53:05.974095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.974127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.979005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef6cc8 00:23:25.251 [2024-11-06 13:53:05.979617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.979647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.989441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef8a50 00:23:25.251 [2024-11-06 13:53:05.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.990572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:05.997672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee49b0 00:23:25.251 [2024-11-06 13:53:05.998500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:05.998532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.006332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eed920 00:23:25.251 [2024-11-06 13:53:06.007233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 
13:53:06.007264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.016836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeb328 00:23:25.251 [2024-11-06 13:53:06.018237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.018266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.023089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eee190 00:23:25.251 [2024-11-06 13:53:06.023763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.023793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.033585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee8d30 00:23:25.251 [2024-11-06 13:53:06.034745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.034776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.041814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efdeb0 00:23:25.251 [2024-11-06 13:53:06.042741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.042774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.050398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0a68 00:23:25.251 [2024-11-06 13:53:06.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.051383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.060932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee7c50 00:23:25.251 [2024-11-06 13:53:06.062376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.062405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.067293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef0bc0 00:23:25.251 [2024-11-06 13:53:06.067869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 
[2024-11-06 13:53:06.067906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.077702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016edf550 00:23:25.251 [2024-11-06 13:53:06.078921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.085984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef9b30 00:23:25.251 [2024-11-06 13:53:06.086912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.251 [2024-11-06 13:53:06.086943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:25.251 [2024-11-06 13:53:06.094526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0ea0 00:23:25.251 [2024-11-06 13:53:06.095518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.095548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.105077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee9e10 00:23:25.252 [2024-11-06 13:53:06.106554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.106583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.111357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eff3c8 00:23:25.252 [2024-11-06 13:53:06.112105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.112135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.121810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee01f8 00:23:25.252 [2024-11-06 13:53:06.123093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.123125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.130748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee2c28 00:23:25.252 [2024-11-06 13:53:06.132024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:25.252 [2024-11-06 13:53:06.132053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.139336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ede8a8 00:23:25.252 [2024-11-06 13:53:06.140482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.140512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:25.252 28669.00 IOPS, 111.99 MiB/s [2024-11-06T13:53:06.185Z] [2024-11-06 13:53:06.148110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efac10 00:23:25.252 [2024-11-06 13:53:06.148987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.149019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.157062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0630 00:23:25.252 [2024-11-06 13:53:06.158194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.158225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.165245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0ea0 00:23:25.252 [2024-11-06 13:53:06.166123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.166155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.173919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeb760 00:23:25.252 [2024-11-06 13:53:06.174818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.174847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:25.252 [2024-11-06 13:53:06.182741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef8a50 00:23:25.252 [2024-11-06 13:53:06.183291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.252 [2024-11-06 13:53:06.183323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.191174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef8a50 00:23:25.511 [2024-11-06 13:53:06.191645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:7836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.191676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.201501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eebfd0 00:23:25.511 [2024-11-06 13:53:06.202781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.202813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.207760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef9b30 00:23:25.511 [2024-11-06 13:53:06.208307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.208337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.218100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef5378 00:23:25.511 [2024-11-06 13:53:06.219160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.219198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.226110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef0788 00:23:25.511 [2024-11-06 13:53:06.226914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.226945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.234636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee1710 00:23:25.511 [2024-11-06 13:53:06.235490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.235518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.245004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee0630 00:23:25.511 [2024-11-06 13:53:06.246338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.246370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.251179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eea680 00:23:25.511 [2024-11-06 13:53:06.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.251824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.261754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee23b8 00:23:25.511 [2024-11-06 13:53:06.262881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.511 [2024-11-06 13:53:06.262912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:25.511 [2024-11-06 13:53:06.269770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eefae0 00:23:25.512 [2024-11-06 13:53:06.270627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.270659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.278387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef1ca0 00:23:25.512 [2024-11-06 13:53:06.279296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.279326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.288874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee7818 00:23:25.512 [2024-11-06 13:53:06.290281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.290310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.295061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef4b08 00:23:25.512 [2024-11-06 13:53:06.295736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.295767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.305677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee6738 00:23:25.512 [2024-11-06 13:53:06.306857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.306896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.313705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee84c0 00:23:25.512 [2024-11-06 13:53:06.314633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.314666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.322211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeaef0 00:23:25.512 [2024-11-06 13:53:06.323181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.323219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.331137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eedd58 00:23:25.512 [2024-11-06 13:53:06.332098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.332129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.339465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016eeea00 00:23:25.512 [2024-11-06 13:53:06.340301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.340332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.347724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee9168 00:23:25.512 [2024-11-06 13:53:06.348435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.348466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.356114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ef31b8 00:23:25.512 [2024-11-06 13:53:06.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.356754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.366498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016efb048 00:23:25.512 [2024-11-06 13:53:06.367280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.512 [2024-11-06 13:53:06.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:25.512 [2024-11-06 13:53:06.374742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1ced0) with pdu=0x200016ee8d30 00:23:25.512 [2024-11-06 
[log condensed: between 13:53:06.375 and 13:53:07.144, tcp.c:2233:data_crc32_calc_done repeatedly reports "*ERROR*: Data digest error on tqpair=(0xb1ced0)" against varying pdu offsets; each error is followed by the matching nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE print (sqid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and an nvme_qpair.c: 474:spdk_nvme_print_completion print of COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1. The same three-line pattern repeats for several dozen injected writes; only the lba, cid and sqhd fields change from event to event.]
00:23:26.294 28801.00 IOPS, 112.50 MiB/s
00:23:26.294 Latency(us)
[2024-11-06T13:53:07.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.294 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:26.294 nvme0n1 : 2.00 28825.29 112.60 0.00 0.00 4434.84 1934.50 11791.22
[2024-11-06T13:53:07.227Z] ===================================================================================================================
[2024-11-06T13:53:07.227Z] Total : 28825.29 112.60 0.00 0.00 4434.84 1934.50 11791.22
00:23:26.294 {
00:23:26.294   "results": [
00:23:26.294     {
00:23:26.294       "job": "nvme0n1",
00:23:26.294       "core_mask": "0x2",
00:23:26.294       "workload": "randwrite",
00:23:26.294       "status": "finished",
00:23:26.294       "queue_depth": 128,
00:23:26.294       "io_size": 4096,
00:23:26.294       "runtime": 2.004941,
00:23:26.294       "iops": 28825.287128149906,
00:23:26.294       "mibps": 112.59877784433557,
00:23:26.294       "io_failed": 0,
00:23:26.294       "io_timeout": 0,
00:23:26.294       "avg_latency_us": 4434.8443451934845,
00:23:26.294       "min_latency_us": 1934.4963855421686,
00:23:26.294       "max_latency_us": 11791.216064257029
00:23:26.294     }
00:23:26.294   ],
00:23:26.294   "core_count": 1
00:23:26.294 }
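[annotation, not part of the captured log: the throughput figures above are internally consistent; with the 4096-byte io_size, MiB/s follows directly from IOPS. A minimal shell check using the values from the JSON block:]

    # cross-check: mibps == iops * io_size / 2^20
    awk 'BEGIN { iops = 28825.287128149906; io_size = 4096;
                 printf "%.8f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 112.59877784 MiB/s, matching the reported "mibps" field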
00:23:26.294 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:26.294 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:26.294 | .driver_specific
00:23:26.294 | .nvme_error
00:23:26.294 | .status_code
00:23:26.294 | .command_transient_transport_error'
00:23:26.294 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:26.294 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 ))
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95175
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95175 ']'
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95175
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95175
00:23:26.552 killing process with pid 95175
00:23:26.552 Received shutdown signal, test time was about 2.000000 seconds
00:23:26.552
00:23:26.552 Latency(us)
[2024-11-06T13:53:07.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-06T13:53:07.485Z] ===================================================================================================================
[2024-11-06T13:53:07.485Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95175'
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95175
00:23:26.552 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95175
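[annotation, not part of the captured log: the get_transient_errcount call traced above (host/digest.sh@71) reduces to the pipeline below; the jq filter walks the --nvme-error-stat counters that bdevperf exposes through bdev_get_iostat, and the resulting count is what the (( 226 > 0 )) assertion tests. Socket path and bdev name mirror this run:]

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
    # printed 226 here: the number of WRITEs that completed with
    # COMMAND TRANSIENT TRANSPORT ERROR during the 2-second run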
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95260
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95260 /var/tmp/bperf.sock
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95260 ']'
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:27.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:27.120 13:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:27.120 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:27.120 Zero copy mechanism will not be used.
00:23:27.120 [2024-11-06 13:53:07.799145] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:23:27.120 [2024-11-06 13:53:07.799248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95260 ]
00:23:27.120 [2024-11-06 13:53:07.952599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:28.056 [2024-11-06 13:53:08.020137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:28.056 13:53:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:28.315 nvme0n1
00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
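[annotation, not part of the captured log: stripped of xtrace noise, the setup traced above amounts to the sequence below. bperf_rpc sends RPCs to the bdevperf socket (/var/tmp/bperf.sock), while rpc_cmd carries no -s flag in the trace and so goes to the default RPC socket of the target application, where the accel-layer CRC32C errors are injected (accel_error_inject_error -o crc32c -t corrupt -i 32, exactly as traced) so that data digests miscompare on subsequent writes. A sketch of the equivalent direct calls:]

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable          # clear any earlier injection
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # inject crc32c corruption, -i 32 as traced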
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:28.315 13:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:28.575 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:28.575 Zero copy mechanism will not be used. 00:23:28.575 Running I/O for 2 seconds... 00:23:28.575 [2024-11-06 13:53:09.330458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.331007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.331046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.334791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.335349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.335387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.339138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.339695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.339731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.343513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.344060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.344095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.347808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.348342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.348377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.352090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 
[2024-11-06 13:53:09.352607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.352641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.356309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.356804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.360521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.361061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.361094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.364717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.365281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.368959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.369491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.369522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.373143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.373671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.373703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.377392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.377920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.381597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) 
with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.382104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.382137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.385788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.386293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.386326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.389961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.390495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.390528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.394246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.394770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.394803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.398475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.399017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.575 [2024-11-06 13:53:09.399051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.575 [2024-11-06 13:53:09.402711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.575 [2024-11-06 13:53:09.403239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.403273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.406928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.407448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.407481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.411161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.411669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.411702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.415396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.415936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.415968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.419639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.420169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.420202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.423881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.424392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.424424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.428083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.428595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.428627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.432291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.432783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.432816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.436460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.436996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.437028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.440674] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.441211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.441244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.444977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.445486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.445518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.449199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.449700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.449733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.453401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.453947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.453980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.457655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.458192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.458224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.461922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.462445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.462477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.466139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.466669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.466701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
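Every cycle in the stretch above is the same event with a fresh WRITE: tcp.c's data_crc32_calc_done() callback sees that the CRC32C data digest recomputed over the received PDU's data section differs from the digest the peer sent (this test pass injects the corruption deliberately), and the command is failed upward. A minimal, self-contained sketch of that digest comparison, assuming only that the digest is CRC32C as NVMe/TCP defines it; this is a bitwise implementation, not SPDK's accelerated one, and the buffer is a made-up stand-in for the payloads logged above.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78:
 * the digest algorithm NVMe/TCP uses for the DDGST field. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Known-answer test: crc32c("123456789") must be 0xE3069283. */
    const char *kat = "123456789";
    printf("crc32c(kat) = 0x%08X\n", (unsigned)crc32c(kat, strlen(kat)));

    /* Stand-in for a received WRITE payload whose digest was
     * corrupted in flight, as this error-injection pass does. */
    uint8_t data[512] = { 0xAB };
    uint32_t ddgst_received = crc32c(data, sizeof(data)) ^ 0x1u;

    if (crc32c(data, sizeof(data)) != ddgst_received)
        printf("data digest mismatch -> fail the command, "
               "leave the retry decision to the host (dnr:0)\n");
    return 0;
}

The "_done" suffix in the logged function name suggests the digest is computed asynchronously (SPDK can offload CRC32C to an accel engine), which would be why the mismatch is reported from a completion callback rather than inline on the receive path.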
00:23:28.576 [2024-11-06 13:53:09.470396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.470898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.470928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.474569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.475101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.475133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.478737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.479270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.479303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.483007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.483520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.483553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.487252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.487792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.487825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.491522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.492071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.492103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.495696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.496223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.496255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.499928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.500442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.500475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.576 [2024-11-06 13:53:09.504193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.576 [2024-11-06 13:53:09.504698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.576 [2024-11-06 13:53:09.504731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.508383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.508930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.508962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.512622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.513149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.513181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.516814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.517352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.517384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.521019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.521546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.521578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.525284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.525787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.525819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.529467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.530007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.530039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.533726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.534241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.537970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.538492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.538525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.542215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.542772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.546420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.546958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.546990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.550614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.551157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.551189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.554906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.555434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.555465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.559082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.559586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.559618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.563240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.563788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.563820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.567576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.568110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.568142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.571807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.572351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.576098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.576610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.576642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.580375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.580923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.841 [2024-11-06 13:53:09.580951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.841 [2024-11-06 13:53:09.584653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.841 [2024-11-06 13:53:09.585207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 
[2024-11-06 13:53:09.585239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.588947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.589474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.589508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.593186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.593701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.593735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.597456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.597992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.598025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.601548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.602045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.602079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.605621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.606197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.609848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.610392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.610426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.614089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.614605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.614638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.618333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.618840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.618885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.622578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.623093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.623127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.626803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.627332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.627365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.631069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.631615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.631648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.635383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.635923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.635975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.639663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.640207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.643808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.644348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.644380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.648023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.648542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.648574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.652233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.652750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.652783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.656472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.656993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.657025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.660692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.661197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.661229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.664878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.665363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.665396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.669056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.669587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.669620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.673288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.673805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.673839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.677482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.678006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.678039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.842 [2024-11-06 13:53:09.681630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.842 [2024-11-06 13:53:09.682142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.842 [2024-11-06 13:53:09.682175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.685842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.686374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.686406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.690156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.690684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.690717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.694394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.694926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.698629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.699130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.699161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.702827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.703341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.703373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.707031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.707620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.707653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.711387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.711925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.711959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.715574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.716103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.716135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.719803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.720318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.720351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.724011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.724547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.724580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.728329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.728854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.728895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.732528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 
00:23:28.843 [2024-11-06 13:53:09.733052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.733085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.736785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.737310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.737343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.741050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.741554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.741587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.745249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.745782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.745816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.749490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.750050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.753702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.754251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.757850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.758384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.843 [2024-11-06 13:53:09.758418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.843 [2024-11-06 13:53:09.762059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.843 [2024-11-06 13:53:09.762565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.844 [2024-11-06 13:53:09.762599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.844 [2024-11-06 13:53:09.766242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:28.844 [2024-11-06 13:53:09.766764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.844 [2024-11-06 13:53:09.766798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.770504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.771039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.771072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.774728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.775293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.778994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.779498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.779530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.783244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.783778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.783810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.787519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.788057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.788090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.791757] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.792293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.792325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.795974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.796491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.796523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.800201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.800717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.800749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.804430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.804943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.804976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.808644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.809148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.809180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.812887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.813403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.813436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.817098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.817607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.817640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
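For reading the completion lines themselves: the (00/22) pair is status code type 0x00 and status code 0x22, which SPDK's printer labels COMMAND TRANSIENT TRANSPORT ERROR, and p/m/dnr are the phase, more, and do-not-retry bits. A hedged helper for unpacking that status from a completion's DW3, assuming the NVMe base specification layout; it is illustrative only, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

/* NVMe completion DW3: bit 16 is the phase tag, bits 31:17 the
 * status field: SC (24:17), SCT (27:25), CRD (29:28, newer spec
 * revisions), M (30), DNR (31). */
struct cpl_status {
    uint8_t sc;   /* status code */
    uint8_t sct;  /* status code type */
    uint8_t crd;  /* command retry delay */
    uint8_t m;    /* more */
    uint8_t dnr;  /* do not retry */
    uint8_t p;    /* phase tag */
};

static struct cpl_status unpack_status(uint32_t cqe_dw3)
{
    struct cpl_status s = {
        .p   = (cqe_dw3 >> 16) & 0x1,
        .sc  = (cqe_dw3 >> 17) & 0xFF,
        .sct = (cqe_dw3 >> 25) & 0x7,
        .crd = (cqe_dw3 >> 28) & 0x3,
        .m   = (cqe_dw3 >> 30) & 0x1,
        .dnr = (cqe_dw3 >> 31) & 0x1,
    };
    return s;
}

int main(void)
{
    /* DW3 with SCT=0x0, SC=0x22 and all other status bits clear,
     * matching the completions logged above. */
    uint32_t dw3 = 0x22u << 17;
    struct cpl_status s = unpack_status(dw3);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

With dnr:0 the error is marked retryable rather than fatal, so the host keeps resubmitting WRITEs on the same qpair instead of tearing it down, and the loop above simply runs until the test stops injecting bad digests.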
00:23:29.105 [2024-11-06 13:53:09.821297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.821793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.821826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.825420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.825964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.825996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.829686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.830218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.830250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.833908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.834431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.834464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.838166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.838681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.838713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.842412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.842935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.842967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.846654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.847202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.847235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.850913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.851456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.851487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.855143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.855695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.855728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.859413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.859940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.859972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.863602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.864107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.105 [2024-11-06 13:53:09.864140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.105 [2024-11-06 13:53:09.867712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.105 [2024-11-06 13:53:09.868239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.106 [2024-11-06 13:53:09.868271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.106 [2024-11-06 13:53:09.872014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.106 [2024-11-06 13:53:09.872534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.106 [2024-11-06 13:53:09.872566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.106 [2024-11-06 13:53:09.876237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.106 [2024-11-06 13:53:09.876749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.106 [2024-11-06 13:53:09.876783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:29.106 [2024-11-06 13:53:09.880399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90
00:23:29.106 [2024-11-06 13:53:09.880940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.106 [2024-11-06 13:53:09.880973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:29.106 [... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0xb1d210), pdu=0x200016efef90 for dozens more 32-block WRITEs (lba 22496, 16192, 8480, 1312, 24928, ..., 14624), sqhd cycling 0001/0021/0041/0061, from 13:53:09.884 through 13:53:10.198 ...]
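Every failure above is raised in tcp.c:data_crc32_calc_done when the data digest (DDGST) re-computed over the DATA field of a received PDU does not match the digest carried in the PDU trailer; per the NVMe/TCP transport specification that digest is a CRC32C (Castagnoli) checksum. The following is a minimal bitwise sketch of that checksum for illustration only: SPDK's real path uses its table-driven/hardware-accelerated CRC32C helpers, the function name here is not an SPDK API, and the input in main() is the standard CRC check string rather than data from this run.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 -- the
 * checksum the NVMe/TCP data digest (DDGST) is defined over. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;   /* final inversion */
}

int main(void)
{
    /* "123456789" is the conventional CRC check input; CRC32C yields 0xE3069283. */
    const uint8_t data[] = "123456789";

    printf("ddgst = 0x%08X\n", crc32c(data, sizeof(data) - 1));
    return 0;
}

When the receiver's value disagrees with the DDGST trailer, only the payload (not the command itself) is suspect, which is why the affected WRITE is completed with a transient transport error instead of a media error.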
00:23:29.369 [... triplet continues unchanged for further WRITEs (lba 13824, 16416, 13472, ..., 8352) through 13:53:10.317 ...]
00:23:29.631 7254.00 IOPS, 906.75 MiB/s [2024-11-06T13:53:10.564Z]
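Every completion in this run carries the status printed as "(00/22) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, and the Do Not Retry bit clear, which is why the host keeps reissuing these WRITEs rather than failing them. A minimal sketch of how those printed fields unpack from the 16-bit completion status word, following the NVMe base-specification layout (the helper name is illustrative, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Unpack the NVMe completion status word (CQE dword 3, bits 31:16):
 * bit 0 = phase tag (P), bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bits 13:12 = command retry delay,
 * bit 14 = more (M), bit 15 = do not retry (DNR). */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xFF;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 (Command Transient Transport Error) with DNR clear,
     * matching the completions logged above: prints "(00/22) p:0 m:0 dnr:0". */
    print_status(0x22 << 1);
    return 0;
}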
00:23:29.631 [... digest-error triplet resumes after the throughput sample for further WRITEs (lba 18304, 7232, 5856, ..., 24992) from 13:53:10.321 through 13:53:10.490 ...]
00:23:29.632 [2024-11-06 13:53:10.494054]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.494600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.494633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.498252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.498787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.498820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.502489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.503005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.503039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.506669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.507216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.507248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.510895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.511426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.515070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.515610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.515642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.519346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.519858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.519897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:29.632 [2024-11-06 13:53:10.523552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.524060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.524092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.527733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.528261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.632 [2024-11-06 13:53:10.528293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.632 [2024-11-06 13:53:10.531959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.632 [2024-11-06 13:53:10.532466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.532498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.536185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.536679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.536712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.540406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.540943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.540990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.544650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.545207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.548839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.549369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.549401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.553080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.553584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.553616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.557290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.557783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.557815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.633 [2024-11-06 13:53:10.561463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.633 [2024-11-06 13:53:10.561998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.633 [2024-11-06 13:53:10.562031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.893 [2024-11-06 13:53:10.565690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.566222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.566255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.569914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.570424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.570456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.574085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.574595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.574628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.578370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.578930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.578962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.582637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.583168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.583208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.586877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.587422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.587454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.591095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.591641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.591674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.595314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.595799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.595831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.599427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.599951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.599982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.603698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.604221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.604252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.607899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.608411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.608443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.612219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.612747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.612780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.616415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.616949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.616981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.620594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.621119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.621150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.624790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.625327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.625358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.629075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.629613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.629645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.633279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.633820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.633853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.637515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.638067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 
[2024-11-06 13:53:10.638100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.641728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.642279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.642312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.645946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.646508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.646543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.650237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.650787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.650820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.654504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.655048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.655080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.658788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.659345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.659378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.663017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.663566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.663598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.667301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.667831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.667872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.671571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.672105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.672137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.894 [2024-11-06 13:53:10.675789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.894 [2024-11-06 13:53:10.676332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.894 [2024-11-06 13:53:10.676364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.680039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.680545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.680577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.684201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.684734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.684765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.688395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.688933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.688966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.692608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.693126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.693159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.696845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.697354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.697386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.701083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.701628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.701661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.705347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.705916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.705947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.709582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.710152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.713753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.714292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.714325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.717966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.718502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.718534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.722169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.722730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.726367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.726920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.726952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.730573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.731111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.731143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.734773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.735321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.735353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.738982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.739500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.739532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.743216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.743738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.743770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.747356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.747839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.747879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.751555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.752091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.752125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.755827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 
[2024-11-06 13:53:10.756370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.756403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.760101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.760630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.760662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.764314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.764842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.764884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.768584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.769131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.769164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.772875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.773398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.773432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.777100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.777636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.777669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.781321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.781861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.781903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.785536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with 
pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.786052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.786084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.895 [2024-11-06 13:53:10.789635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.895 [2024-11-06 13:53:10.790142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.895 [2024-11-06 13:53:10.790176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.793791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.794346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.794380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.797929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.798467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.798499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.802187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.802740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.802774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.806438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.806991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.807023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.810632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.811133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.811165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.814718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.815279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.818908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.819446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.819478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.896 [2024-11-06 13:53:10.823155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:29.896 [2024-11-06 13:53:10.823680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.896 [2024-11-06 13:53:10.823714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.827337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.827818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.827850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.831544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.832083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.832116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.835727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.836248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.836281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.839995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.840501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.840534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.844171] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.844652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.844686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.848364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.848910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.848940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.852565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.853090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.853137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.856786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.857307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.857341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.861030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.861524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.865283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.865851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.869502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.870020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.870053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:30.157 [2024-11-06 13:53:10.873670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.874208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.877980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.878504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.878537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.882158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.882672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.882706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.886357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.886906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.886939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.891455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.892074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.892107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.895762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.896289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.896322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.899964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.900477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.904149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.904640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.904673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.908370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.908918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.908951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.912632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.913163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.913196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.916920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.917463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.917495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.921114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.921629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.921662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.925246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.157 [2024-11-06 13:53:10.925751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.157 [2024-11-06 13:53:10.925784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.157 [2024-11-06 13:53:10.929431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.929987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.930020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.933652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.934202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.934235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.937835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.938373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.942038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.942529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.942562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.946254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.946800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.950520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.951081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.951113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.954796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.955365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.955397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.959059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.959590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.959622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.963341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.963873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.963913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.967537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.968065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.968097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.971755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.972303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.972335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.975997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.976529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.976560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.980251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.980780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.980813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.984525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.985045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.985078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.988797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.989304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 
[2024-11-06 13:53:10.989337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.993019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.993523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.993556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:10.997245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:10.997771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:10.997805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.001471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.001990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.002024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.005676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.006211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.006244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.009875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.010411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.010443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.014094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.014644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.014677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.018366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.018900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.018933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.022620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.023120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.023152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.026820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.027387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.027420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.031003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.031560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.031593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.035278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.035798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.035831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.039545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.158 [2024-11-06 13:53:11.040059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.158 [2024-11-06 13:53:11.040091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.158 [2024-11-06 13:53:11.043731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.044247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.044279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.047930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.048419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.048451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.052215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.052730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.052763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.056454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.056982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.057014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.060648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.061161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.061208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.064825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.065397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.069050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.069580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.069612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.073266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.073786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.073819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.077484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.078015] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.078047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.081712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.082227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.082260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.159 [2024-11-06 13:53:11.085881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.159 [2024-11-06 13:53:11.086372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.159 [2024-11-06 13:53:11.086405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.419 [2024-11-06 13:53:11.090121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.090650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.090683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.094371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.094879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.094912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.098476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.099011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.099043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.102718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.103248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.103280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.106973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.107505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.107537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.111250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.111764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.111796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.115449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.115967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.116000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.119644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.120174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.120206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.123868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.124397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.124429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.128044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.128554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.128586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.132224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.132727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.132760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.136386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 
[2024-11-06 13:53:11.136957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.136989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.140645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.141202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.141234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.144872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.145424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.145456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.149152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.149700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.149732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.153373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.153916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.153948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.157487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.158027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.158059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.161660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.162189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.162221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.165932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with 
pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.166447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.166480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.170093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.170600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.170632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.174213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.174754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.178441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.178957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.178989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.182581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.183094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.183126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.186776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.187317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.187350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.190917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.191452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.191484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.195104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.195650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.420 [2024-11-06 13:53:11.195683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.420 [2024-11-06 13:53:11.199313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.420 [2024-11-06 13:53:11.199826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.199869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.203506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.204023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.204055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.207648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.208169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.208201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.211861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.212381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.212415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.216059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.216571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.216603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.220301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.220800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.220833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.224491] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.225045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.225076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.228703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.229246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.229279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.232983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.233539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.233572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.237240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.237761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.237794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.241415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.241934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.241966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.245593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.246139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.246171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.249778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.250320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.250353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:30.421 [2024-11-06 13:53:11.254042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.254562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.254594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.258225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.258746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.262343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.262897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.262929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.266575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.267107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.267140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.270762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.271298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.271333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.274970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.275528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.279181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.279702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.279734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.283353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.283872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.283914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.287534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.288055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.288087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.291678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.292195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.292227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.295834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.296357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.296390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.299977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.300471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.300504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.304141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.304647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.304680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.421 [2024-11-06 13:53:11.308361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90 00:23:30.421 [2024-11-06 13:53:11.308905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.421 [2024-11-06 13:53:11.308946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:30.421 [2024-11-06 13:53:11.312625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb1d210) with pdu=0x200016efef90
00:23:30.422 [2024-11-06 13:53:11.313158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.422 [2024-11-06 13:53:11.313190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:30.422 7290.00 IOPS, 911.25 MiB/s
00:23:30.422 Latency(us)
00:23:30.422 [2024-11-06T13:53:11.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.422 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:30.422 nvme0n1 : 2.00 7288.61 911.08 0.00 0.00 2191.83 1493.64 11264.82
00:23:30.422 [2024-11-06T13:53:11.355Z] ===================================================================================================================
00:23:30.422 [2024-11-06T13:53:11.355Z] Total : 7288.61 911.08 0.00 0.00 2191.83 1493.64 11264.82
00:23:30.422 {
00:23:30.422 "results": [
00:23:30.422 {
00:23:30.422 "job": "nvme0n1",
00:23:30.422 "core_mask": "0x2",
00:23:30.422 "workload": "randwrite",
00:23:30.422 "status": "finished",
00:23:30.422 "queue_depth": 16,
00:23:30.422 "io_size": 131072,
00:23:30.422 "runtime": 2.003262,
00:23:30.422 "iops": 7288.612273382114,
00:23:30.422 "mibps": 911.0765341727642,
00:23:30.422 "io_failed": 0,
00:23:30.422 "io_timeout": 0,
00:23:30.422 "avg_latency_us": 2191.8252777427083,
00:23:30.422 "min_latency_us": 1493.641767068273,
00:23:30.422 "max_latency_us": 11264.822489959839
00:23:30.422 }
00:23:30.422 ],
00:23:30.422 "core_count": 1
00:23:30.422 }
00:23:30.422 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:30.422 | .driver_specific
00:23:30.422 | .nvme_error
00:23:30.422 | .status_code
00:23:30.422 | .command_transient_transport_error'
00:23:30.422 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 470 > 0 ))
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95260
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95260 ']'
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95260
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:30.680 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95260
00:23:30.939 killing process with pid 95260
Received shutdown signal, test time was about 2.000000 seconds
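What the harness is checking at this point: host/digest.sh@71 reads back how many of the deliberately corrupted writes the initiator recorded as COMMAND TRANSIENT TRANSPORT ERROR, by querying the bdev's NVMe error counters over bdevperf's JSON-RPC socket. A minimal standalone sketch of that same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock as it is at this point in the run:

    #!/usr/bin/env bash
    # Fetch per-bdev I/O statistics over the JSON-RPC socket, then pull out the
    # transient-transport-error counter that the test asserts is greater than zero.
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( count > 0 )) && echo "digest errors were reported as transient transport errors: $count"

Here the counter came back as 470, so the (( 470 > 0 )) assertion above passed and the harness proceeded to kill the bperf process (pid 95260); the zero-filled latency table below is bdevperf's shutdown dump.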
00:23:30.939 Latency(us)
00:23:30.939 [2024-11-06T13:53:11.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.939 [2024-11-06T13:53:11.872Z] ===================================================================================================================
00:23:30.939 [2024-11-06T13:53:11.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:30.939 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:30.939 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:30.939 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95260'
00:23:30.939 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95260
00:23:30.939 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95260
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94945
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94945 ']'
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94945
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94945
00:23:31.198 killing process with pid 94945
13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94945'
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94945
00:23:31.198 13:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94945
00:23:31.458 ************************************
00:23:31.458 END TEST nvmf_digest_error
00:23:31.458 ************************************
00:23:31.458
00:23:31.458 real 0m18.196s
00:23:31.458 user 0m33.533s
00:23:31.458 sys 0m5.378s
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:31.458 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:31.717 rmmod nvme_tcp
00:23:31.717 rmmod nvme_fabrics
00:23:31.717 rmmod nvme_keyring
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94945 ']'
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94945
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 94945 ']'
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 94945
00:23:31.717 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (94945) - No such process
00:23:31.717 Process with pid 94945 is not found
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 94945 is not found'
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:23:31.717 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
00:23:31.976 ************************************
00:23:31.976 END TEST nvmf_digest
00:23:31.976 ************************************
00:23:31.976
00:23:31.976 real 0m38.295s
00:23:31.976 user 1m8.467s
00:23:31.976 sys 0m11.325s
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:31.976 ************************************
00:23:31.976 START TEST nvmf_mdns_discovery
************************************
00:23:31.976 13:53:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:23:32.236 * Looking for test storage...
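Stepping back before the mdns discovery test proper begins: the nvmf_veth_fini teardown traced above (nvmf/common.sh@233 through @245) reduces to the following iproute2 sequence. This is a condensed sketch using the interface and namespace names printed in this run; it assumes the nvmf_tgt_ns_spdk namespace still exists when the last two deletions execute:

    #!/usr/bin/env bash
    # Detach the four test ports from the nvmf_br bridge...
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
    done
    # ...then bring them down.
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" down
    done
    # Delete the bridge and the host-side interfaces.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # Delete the target-side interfaces inside the test network namespace.
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2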
00:23:32.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:32.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.236 --rc genhtml_branch_coverage=1 00:23:32.236 --rc genhtml_function_coverage=1 00:23:32.236 --rc genhtml_legend=1 00:23:32.236 --rc geninfo_all_blocks=1 00:23:32.236 --rc geninfo_unexecuted_blocks=1 00:23:32.236 00:23:32.236 ' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:32.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.236 --rc genhtml_branch_coverage=1 00:23:32.236 --rc genhtml_function_coverage=1 00:23:32.236 --rc genhtml_legend=1 00:23:32.236 --rc geninfo_all_blocks=1 00:23:32.236 --rc geninfo_unexecuted_blocks=1 00:23:32.236 00:23:32.236 ' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:32.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.236 --rc genhtml_branch_coverage=1 00:23:32.236 --rc genhtml_function_coverage=1 00:23:32.236 --rc genhtml_legend=1 00:23:32.236 --rc geninfo_all_blocks=1 00:23:32.236 --rc geninfo_unexecuted_blocks=1 00:23:32.236 00:23:32.236 ' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:32.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.236 --rc genhtml_branch_coverage=1 00:23:32.236 --rc genhtml_function_coverage=1 00:23:32.236 --rc genhtml_legend=1 00:23:32.236 --rc geninfo_all_blocks=1 00:23:32.236 --rc geninfo_unexecuted_blocks=1 00:23:32.236 00:23:32.236 ' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:23:32.236 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.237 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.237 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:32.495 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:32.496 Cannot find device "nvmf_init_br" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:32.496 Cannot find device "nvmf_init_br2" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:32.496 Cannot find device "nvmf_tgt_br" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:32.496 Cannot find device "nvmf_tgt_br2" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:32.496 Cannot find device "nvmf_init_br" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:32.496 Cannot find device "nvmf_init_br2" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:32.496 Cannot find device "nvmf_tgt_br" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:32.496 Cannot find device "nvmf_tgt_br2" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:32.496 Cannot find device "nvmf_br" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:32.496 Cannot find device "nvmf_init_if" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:32.496 Cannot find device "nvmf_init_if2" 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:23:32.496 13:53:13 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:32.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:32.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:32.496 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
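
Condensed, the nvmf_veth_init sequence traced above builds two initiator and two target veth pairs whose bridge-side peers will share one L2 segment; names and addresses are copied from the trace, with the individual link-up steps compressed into loops:

  ip netns add nvmf_tgt_ns_spdk
  # each pair: the *_if end carries the address, the *_br end gets enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  # the four *_br peers are enslaved to nvmf_br in the entries that follow (common.sh@211-@214)
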
00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:32.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:32.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:23:32.755 00:23:32.755 --- 10.0.0.3 ping statistics --- 00:23:32.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.755 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:32.755 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:32.755 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:23:32.755 00:23:32.755 --- 10.0.0.4 ping statistics --- 00:23:32.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.755 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:23:32.755 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:32.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:23:32.755 00:23:32.755 --- 10.0.0.1 ping statistics --- 00:23:32.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.755 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:33.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:33.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:33.015 00:23:33.015 --- 10.0.0.2 ping statistics --- 00:23:33.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.015 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95605 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95605 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 95605 ']' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.015 13:53:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:33.015 [2024-11-06 13:53:13.791332] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
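
With connectivity across the bridge confirmed by the four pings, NVMF_APP is prefixed with the netns command and the target is launched as traced above. A minimal sketch of that launch plus the wait for the RPC socket, where the polling loop is an assumption standing in for the harness's waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &   # --wait-for-rpc: pause before subsystem init
  nvmfpid=$!
  # poll until the app is listening on its default RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # framework_start_init is issued later in the trace to finish bringing the app up
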
00:23:33.015 [2024-11-06 13:53:13.791405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.015 [2024-11-06 13:53:13.941302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.274 [2024-11-06 13:53:14.005961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.274 [2024-11-06 13:53:14.006012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.274 [2024-11-06 13:53:14.006038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.274 [2024-11-06 13:53:14.006047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.274 [2024-11-06 13:53:14.006054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.274 [2024-11-06 13:53:14.006444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.884 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:23:33.885 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.885 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 [2024-11-06 13:53:14.890391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 [2024-11-06 13:53:14.902535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 null0 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 null1 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 null2 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 null3 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95661 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95661 /tmp/host.sock 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 95661 ']' 00:23:34.143 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local 
rpc_addr=/tmp/host.sock 00:23:34.144 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.144 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:34.144 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:34.144 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.144 13:53:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.144 [2024-11-06 13:53:15.022110] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:23:34.144 [2024-11-06 13:53:15.022191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95661 ] 00:23:34.402 [2024-11-06 13:53:15.175709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.402 [2024-11-06 13:53:15.241539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.339 13:53:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:35.339 13:53:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:35.339 13:53:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:35.339 13:53:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:23:35.339 13:53:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:23:35.339 13:53:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95691 00:23:35.339 13:53:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:23:35.339 13:53:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:35.339 13:53:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:35.339 Process 1060 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:35.339 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:35.339 Successfully dropped root privileges. 00:23:35.339 avahi-daemon 0.8 starting up. 00:23:35.339 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:35.339 Successfully called chroot(). 00:23:35.339 Successfully dropped remaining capabilities. 00:23:36.276 No service file found in /etc/avahi/services. 00:23:36.276 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:23:36.276 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:36.276 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:23:36.276 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:36.276 Network interface enumeration completed. 00:23:36.276 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
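
The avahi-daemon above is handed its configuration on /dev/fd/63, i.e. via process substitution, so nothing has to be written under /etc/avahi. An equivalent standalone invocation, with the config contents copied from the echo in the trace:

  # run avahi inside the target namespace so it only ever sees the two target interfaces
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
      '[server]' \
      'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
      'use-ipv4=yes' \
      'use-ipv6=no')
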
00:23:36.276 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:23:36.276 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:36.276 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:23:36.276 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2085621472. 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.276 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:23:36.277 13:53:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.277 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 [2024-11-06 13:53:17.344060] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 [2024-11-06 13:53:17.383002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.536 13:53:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:23:37.473 [2024-11-06 13:53:18.242611] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:37.732 [2024-11-06 13:53:18.641974] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:37.732 [2024-11-06 13:53:18.642001] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:23:37.732 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:37.732 cookie is 0 00:23:37.732 is_local: 1 00:23:37.732 our_own: 0 00:23:37.732 wide_area: 0 00:23:37.732 multicast: 1 00:23:37.732 cached: 1 00:23:37.991 [2024-11-06 13:53:18.741803] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:23:37.991 [2024-11-06 13:53:18.741820] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:37.991 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:37.991 cookie is 0 00:23:37.991 is_local: 1 00:23:37.991 our_own: 0 00:23:37.991 wide_area: 0 00:23:37.991 multicast: 1 00:23:37.991 cached: 1 00:23:38.926 [2024-11-06 13:53:19.641122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.926 [2024-11-06 13:53:19.641185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570710 with addr=10.0.0.4, port=8009 00:23:38.926 [2024-11-06 13:53:19.641228] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:38.926 [2024-11-06 13:53:19.641242] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:38.926 [2024-11-06 13:53:19.641251] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:23:38.926 [2024-11-06 13:53:19.746580] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:38.926 [2024-11-06 13:53:19.746606] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:38.926 [2024-11-06 13:53:19.746637] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:38.926 [2024-11-06 13:53:19.832539] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:23:39.184 [2024-11-06 13:53:19.886851] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:39.184 [2024-11-06 13:53:19.887661] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15a5e90:1 started. 00:23:39.184 [2024-11-06 13:53:19.889626] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:23:39.184 [2024-11-06 13:53:19.889652] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:39.184 [2024-11-06 13:53:19.895013] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15a5e90 was disconnected and freed. delete nvme_qpair. 00:23:39.751 [2024-11-06 13:53:20.639403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.751 [2024-11-06 13:53:20.639452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570350 with addr=10.0.0.4, port=8009 00:23:39.751 [2024-11-06 13:53:20.639477] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:39.751 [2024-11-06 13:53:20.639487] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:39.751 [2024-11-06 13:53:20.639496] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:23:41.129 [2024-11-06 13:53:21.637765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.129 [2024-11-06 13:53:21.637803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158ea90 with addr=10.0.0.4, port=8009 00:23:41.129 [2024-11-06 13:53:21.637841] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:41.129 [2024-11-06 13:53:21.637850] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:41.129 [2024-11-06 13:53:21.637859] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:41.698 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:41.698 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:41.698 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.698 [2024-11-06 13:53:22.488543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:23:41.698 [2024-11-06 13:53:22.491281] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:41.698 [2024-11-06 13:53:22.491311] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.698 [2024-11-06 13:53:22.500522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:23:41.698 [2024-11-06 13:53:22.501282] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:41.698 13:53:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.698 13:53:22 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:23:41.958 [2024-11-06 13:53:22.631226] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:41.958 [2024-11-06 13:53:22.631254] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:41.958 [2024-11-06 13:53:22.642611] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:23:41.958 [2024-11-06 13:53:22.642631] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:23:41.958 [2024-11-06 13:53:22.642644] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:41.958 [2024-11-06 13:53:22.717609] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:41.958 [2024-11-06 13:53:22.728542] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:23:41.958 [2024-11-06 13:53:22.782771] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:23:41.958 [2024-11-06 13:53:22.783347] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x15a2870:1 started. 00:23:41.958 [2024-11-06 13:53:22.784857] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:23:41.958 [2024-11-06 13:53:22.784890] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:41.958 [2024-11-06 13:53:22.791313] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x15a2870 was disconnected and freed. delete nvme_qpair. 
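[editor's note] The @152 "not found" assertion above, and the symmetric @160 "found" assertion below, both rely on one helper that scans avahi-browse's parseable output for a record matching a process name, address, and port. A plausible reconstruction of that helper, inferred from the xtrace (the real definition lives in host/mdns_discovery.sh and may differ in detail):

    check_mdns_request_exists() {
        # Inferred sketch: scan `avahi-browse -t -r _nvme-disc._tcp -p`
        # output for a record containing the process name, IP, and port.
        local process=$1 ip=$2 port=$3 check_type=$4
        local output line
        local -a lines
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"
        for line in "${lines[@]}"; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                # A matching record exists: pass only if we expected "found".
                [[ $check_type == found ]]
                return
            fi
        done
        # No record matched: pass only if we expected "not found".
        [[ $check_type != found ]]
    }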
00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:42.896 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:42.896 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:23:42.896 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:42.896 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:42.896 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:42.896 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:42.896 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:42.896 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:42.897 
13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@73 -- # xargs 00:23:42.897 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.156 [2024-11-06 13:53:23.833578] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:43.156 [2024-11-06 13:53:23.833601] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:43.156 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:43.156 cookie is 0 00:23:43.156 is_local: 1 00:23:43.156 our_own: 0 00:23:43.156 wide_area: 0 00:23:43.156 multicast: 1 00:23:43.156 cached: 1 00:23:43.156 [2024-11-06 13:53:23.833614] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:23:43.156 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.157 [2024-11-06 13:53:23.902036] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15a3830:1 started. 00:23:43.157 [2024-11-06 13:53:23.905108] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x16d4220:1 started. 
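[editor's note] The @162 through @167 assertions above all share one shape: an rpc_cmd JSON query against the host app's socket, piped through jq, sort, and xargs, then string-compared against an expected literal. Hedged sketches of those helpers as the xtrace suggests they are written (the actual bodies in host/mdns_discovery.sh may differ):

    get_mdns_discovery_svcs() {
        # Names of active mDNS discovery services, e.g. "mdns".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        # Attached namespaces, e.g. "mdns0_nvme0n1 mdns1_nvme0n1".
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        # trsvcid of every path of one controller, e.g. "4420 4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }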
00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.157 13:53:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:23:43.157 [2024-11-06 13:53:23.909887] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15a3830 was disconnected and freed. delete nvme_qpair. 00:23:43.157 [2024-11-06 13:53:23.910262] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x16d4220 was disconnected and freed. delete nvme_qpair. 00:23:43.157 [2024-11-06 13:53:24.033251] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:23:43.157 [2024-11-06 13:53:24.033270] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:23:43.157 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:43.157 cookie is 0 00:23:43.157 is_local: 1 00:23:43.157 our_own: 0 00:23:43.157 wide_area: 0 00:23:43.157 multicast: 1 00:23:43.157 cached: 1 00:23:43.157 [2024-11-06 13:53:24.033281] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.095 13:53:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:23:44.095 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.355 [2024-11-06 13:53:25.041569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:44.355 [2024-11-06 13:53:25.041730] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:44.355 [2024-11-06 13:53:25.041753] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:44.355 [2024-11-06 13:53:25.041781] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:44.355 [2024-11-06 13:53:25.041792] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.355 [2024-11-06 13:53:25.053493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:23:44.355 [2024-11-06 13:53:25.053714] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:44.355 [2024-11-06 13:53:25.053748] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.355 13:53:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:23:44.355 [2024-11-06 13:53:25.184589] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:23:44.355 [2024-11-06 13:53:25.184955] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:23:44.355 [2024-11-06 13:53:25.242922] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:23:44.355 [2024-11-06 13:53:25.242976] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:23:44.355 [2024-11-06 13:53:25.242986] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:44.355 [2024-11-06 13:53:25.242992] 
bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:44.355 [2024-11-06 13:53:25.243005] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:44.355 [2024-11-06 13:53:25.243144] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:23:44.355 [2024-11-06 13:53:25.243163] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:23:44.355 [2024-11-06 13:53:25.243169] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:44.355 [2024-11-06 13:53:25.243174] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:44.355 [2024-11-06 13:53:25.243183] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:44.614 [2024-11-06 13:53:25.288510] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:44.614 [2024-11-06 13:53:25.288529] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:44.614 [2024-11-06 13:53:25.288560] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:23:44.614 [2024-11-06 13:53:25.288567] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:45.182 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:23:45.182 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.183 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.443 [2024-11-06 13:53:26.352404] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:45.443 [2024-11-06 13:53:26.352567] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:45.443 [2024-11-06 13:53:26.352771] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:45.443 [2024-11-06 13:53:26.352809] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:45.443 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.443 [2024-11-06 13:53:26.357226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.443 [2024-11-06 13:53:26.357262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.357275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.357284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.357295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.357304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.357314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.357323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.357333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.444 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:23:45.444 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.444 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.444 [2024-11-06 13:53:26.364403] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:45.444 [2024-11-06 13:53:26.364446] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:23:45.444 [2024-11-06 13:53:26.367174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.444 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.444 13:53:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:23:45.444 [2024-11-06 13:53:26.370546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.370573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.370584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.370593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.370603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.370612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.370622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.444 [2024-11-06 13:53:26.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.444 [2024-11-06 13:53:26.370640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.377174] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.377191] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.377198] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.377204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.377234] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
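[editor's note] Steps @195 and @196 are what trigger everything that follows: dropping the original 4420 listeners tears down the SQs (ASYNC EVENT REQUEST aborted with SQ DELETION), and the initiator then retries the now-dead 4420 paths in a reconnect loop (connect() errno = 111) while the 4421 paths stay up. The two RPCs, copied verbatim from the trace:

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420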
00:23:45.706 [2024-11-06 13:53:26.377306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.706 [2024-11-06 13:53:26.377322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.706 [2024-11-06 13:53:26.377333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.377348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.377375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.706 [2024-11-06 13:53:26.377385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.706 [2024-11-06 13:53:26.377397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.706 [2024-11-06 13:53:26.377406] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.706 [2024-11-06 13:53:26.377412] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.706 [2024-11-06 13:53:26.377418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.706 [2024-11-06 13:53:26.380503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.387241] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.387257] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.387262] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.387268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.387289] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.387332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.706 [2024-11-06 13:53:26.387346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.706 [2024-11-06 13:53:26.387356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.387369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.387396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.706 [2024-11-06 13:53:26.387405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.706 [2024-11-06 13:53:26.387415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:45.706 [2024-11-06 13:53:26.387422] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.706 [2024-11-06 13:53:26.387428] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.706 [2024-11-06 13:53:26.387433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.706 [2024-11-06 13:53:26.390494] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.390510] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.390516] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.390521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.390558] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.390599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.706 [2024-11-06 13:53:26.390613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.706 [2024-11-06 13:53:26.390623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.390635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.390648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.706 [2024-11-06 13:53:26.390656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.706 [2024-11-06 13:53:26.390666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.706 [2024-11-06 13:53:26.390674] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.706 [2024-11-06 13:53:26.390679] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.706 [2024-11-06 13:53:26.390684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.706 [2024-11-06 13:53:26.397282] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.397298] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.397304] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.397309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.397328] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:45.706 [2024-11-06 13:53:26.397381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.706 [2024-11-06 13:53:26.397394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.706 [2024-11-06 13:53:26.397404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.397417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.397444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.706 [2024-11-06 13:53:26.397454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.706 [2024-11-06 13:53:26.397463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.706 [2024-11-06 13:53:26.397471] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.706 [2024-11-06 13:53:26.397476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.706 [2024-11-06 13:53:26.397481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.706 [2024-11-06 13:53:26.400550] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.400566] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.400572] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.400577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.400613] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.400649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.706 [2024-11-06 13:53:26.400662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.706 [2024-11-06 13:53:26.400672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.706 [2024-11-06 13:53:26.400684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.706 [2024-11-06 13:53:26.400697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.706 [2024-11-06 13:53:26.400707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.706 [2024-11-06 13:53:26.400716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.706 [2024-11-06 13:53:26.400723] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:45.706 [2024-11-06 13:53:26.400729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.706 [2024-11-06 13:53:26.400734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.706 [2024-11-06 13:53:26.407321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.706 [2024-11-06 13:53:26.407337] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.706 [2024-11-06 13:53:26.407343] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.407348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.706 [2024-11-06 13:53:26.407369] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.706 [2024-11-06 13:53:26.407406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.407420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.707 [2024-11-06 13:53:26.407429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.407442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.407467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.407476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.407486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.407493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.707 [2024-11-06 13:53:26.407499] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.407504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.410604] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.410624] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.410630] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.410635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.410656] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:45.707 [2024-11-06 13:53:26.410714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.410728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.707 [2024-11-06 13:53:26.410737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.410751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.410763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.410772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.410782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.410789] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.707 [2024-11-06 13:53:26.410795] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.410800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.417360] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.417377] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.417383] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.417388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.417424] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.417463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.417476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.707 [2024-11-06 13:53:26.417486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.417512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.417525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.417533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.417543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.417550] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:45.707 [2024-11-06 13:53:26.417556] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.417561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.420648] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.420667] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.420672] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.420677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.420699] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.420759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.420774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.707 [2024-11-06 13:53:26.420784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.420796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.420809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.420818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.420827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.420835] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.707 [2024-11-06 13:53:26.420841] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.420846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.427416] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.427433] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.427439] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.427444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.427464] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:45.707 [2024-11-06 13:53:26.427517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.427531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.707 [2024-11-06 13:53:26.427540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.427553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.427565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.427574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.427583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.427591] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.707 [2024-11-06 13:53:26.427596] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.427601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.430690] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.430706] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.430712] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.430717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.430735] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.430789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.707 [2024-11-06 13:53:26.430802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.707 [2024-11-06 13:53:26.430812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.707 [2024-11-06 13:53:26.430824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.707 [2024-11-06 13:53:26.430837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.707 [2024-11-06 13:53:26.430846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.707 [2024-11-06 13:53:26.430855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.707 [2024-11-06 13:53:26.430863] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:45.707 [2024-11-06 13:53:26.430868] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.707 [2024-11-06 13:53:26.430874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.707 [2024-11-06 13:53:26.437455] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.707 [2024-11-06 13:53:26.437471] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.707 [2024-11-06 13:53:26.437477] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.707 [2024-11-06 13:53:26.437482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.707 [2024-11-06 13:53:26.437503] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.437541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.437554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.708 [2024-11-06 13:53:26.437563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.437576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.437588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.437597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.437606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.437613] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.708 [2024-11-06 13:53:26.437619] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.437624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.440728] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.440743] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.440749] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.440754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.708 [2024-11-06 13:53:26.440773] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:45.708 [2024-11-06 13:53:26.440824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.440838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.708 [2024-11-06 13:53:26.440847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.440859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.440872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.440889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.440898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.440906] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.708 [2024-11-06 13:53:26.440912] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.440917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.447495] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.447512] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.447517] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.447523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.708 [2024-11-06 13:53:26.447543] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.447580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.447593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.708 [2024-11-06 13:53:26.447602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.447615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.447627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.447636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.447647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.447654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:45.708 [2024-11-06 13:53:26.447660] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.447665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.450766] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.450784] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.450790] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.450795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.708 [2024-11-06 13:53:26.450816] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.450855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.450880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.708 [2024-11-06 13:53:26.450889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.450902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.450915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.450924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.450933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.450940] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.708 [2024-11-06 13:53:26.450946] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.450951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.457537] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.457558] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.457564] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.457569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.708 [2024-11-06 13:53:26.457608] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:45.708 [2024-11-06 13:53:26.457651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.457665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.708 [2024-11-06 13:53:26.457674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.457687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.457700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.457709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.457718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.457725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.708 [2024-11-06 13:53:26.457731] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.457736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.460807] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.460822] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.460828] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.460833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.708 [2024-11-06 13:53:26.460875] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.460913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.708 [2024-11-06 13:53:26.460927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.708 [2024-11-06 13:53:26.460936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.708 [2024-11-06 13:53:26.460948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.708 [2024-11-06 13:53:26.460969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.708 [2024-11-06 13:53:26.460978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.708 [2024-11-06 13:53:26.460987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.708 [2024-11-06 13:53:26.460995] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:45.708 [2024-11-06 13:53:26.461000] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.708 [2024-11-06 13:53:26.461005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.708 [2024-11-06 13:53:26.467600] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.708 [2024-11-06 13:53:26.467617] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.708 [2024-11-06 13:53:26.467622] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.708 [2024-11-06 13:53:26.467628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.467648] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.467685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.467698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.709 [2024-11-06 13:53:26.467708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.467720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.467732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.467741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.467750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.467758] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.709 [2024-11-06 13:53:26.467763] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.467769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.470866] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.709 [2024-11-06 13:53:26.470887] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.709 [2024-11-06 13:53:26.470893] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.470898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.470934] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:23:45.709 [2024-11-06 13:53:26.470970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.470983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.709 [2024-11-06 13:53:26.470992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.471004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.471017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.471025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.471035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.471042] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.709 [2024-11-06 13:53:26.471048] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.471053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.477639] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.709 [2024-11-06 13:53:26.477654] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.709 [2024-11-06 13:53:26.477660] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.477665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.477700] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.477735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.477748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.709 [2024-11-06 13:53:26.477757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.477769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.477781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.477789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.477798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.477806] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:45.709 [2024-11-06 13:53:26.477811] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.477816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.480932] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.709 [2024-11-06 13:53:26.480948] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.709 [2024-11-06 13:53:26.480954] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.480959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.480980] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.481015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.481028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.709 [2024-11-06 13:53:26.481038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.481049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.481061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.481070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.481079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.481086] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:23:45.709 [2024-11-06 13:53:26.481092] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.481097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.487692] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:45.709 [2024-11-06 13:53:26.487708] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:45.709 [2024-11-06 13:53:26.487714] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.487719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.487754] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:45.709 [2024-11-06 13:53:26.487791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.487804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15827f0 with addr=10.0.0.3, port=4420 00:23:45.709 [2024-11-06 13:53:26.487814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15827f0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.487826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15827f0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.487838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.487848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.487857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.487872] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.709 [2024-11-06 13:53:26.487878] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.487884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.490972] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:23:45.709 [2024-11-06 13:53:26.490987] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:23:45.709 [2024-11-06 13:53:26.490993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.490998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:23:45.709 [2024-11-06 13:53:26.491016] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:23:45.709 [2024-11-06 13:53:26.491067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.709 [2024-11-06 13:53:26.491080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15904b0 with addr=10.0.0.4, port=4420 00:23:45.709 [2024-11-06 13:53:26.491089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15904b0 is same with the state(6) to be set 00:23:45.709 [2024-11-06 13:53:26.491101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15904b0 (9): Bad file descriptor 00:23:45.709 [2024-11-06 13:53:26.491113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:23:45.709 [2024-11-06 13:53:26.491122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:23:45.709 [2024-11-06 13:53:26.491131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:23:45.709 [2024-11-06 13:53:26.491139] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:23:45.709 [2024-11-06 13:53:26.491144] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:23:45.709 [2024-11-06 13:53:26.491150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:23:45.709 [2024-11-06 13:53:26.494396] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:23:45.709 [2024-11-06 13:53:26.494419] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:45.710 [2024-11-06 13:53:26.494450] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:45.710 [2024-11-06 13:53:26.495413] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:23:45.710 [2024-11-06 13:53:26.495434] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:45.710 [2024-11-06 13:53:26.495447] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:23:45.710 [2024-11-06 13:53:26.580328] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:45.710 [2024-11-06 13:53:26.581322] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
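The block above is SPDK's bdev_nvme reconnect path cycling for nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:cnode20: delete the qpairs, disconnect, reconnect, connect() fails with errno 111 (ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.3:4420 / 10.0.0.4:4420 at this point in the test), and the controller lands back in the failed state until discovery re-adds a reachable path on port 4421 further down. The interleaved 13:53:27 lines are bash xtrace output from host/mdns_discovery.sh, which inspects the target app over its JSON-RPC Unix socket. A minimal standalone sketch of that query pattern, assuming SPDK's scripts/rpc.py and jq are installed (the helper name mirrors the script's get_subsystem_names; this is an illustration, not the suite's exact code):

    #!/usr/bin/env bash
    # List bdev_nvme controller names via the app's RPC socket, normalized
    # to one sorted line, the same shape the test compares against.
    get_subsystem_names() {
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    # Poll until both mDNS-discovered controllers are present.
    until [[ $(get_subsystem_names) == 'mdns0_nvme0 mdns1_nvme0' ]]; do
        sleep 1
    done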
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:46.647 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.907 13:53:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
[2024-11-06 13:53:27.727273] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:23:47.843 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.102 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 13:53:28.870607] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
2024/11/06 13:53:28 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:23:48.103 request:
00:23:48.103 {
00:23:48.103 "method": "bdev_nvme_start_mdns_discovery",
00:23:48.103 "params": {
00:23:48.103 "name": "mdns",
00:23:48.103 "svcname": "_nvme-disc._http",
00:23:48.103 "hostnqn": "nqn.2021-12.io.spdk:test"
00:23:48.103 }
00:23:48.103 }
00:23:48.103 Got JSON-RPC error response
00:23:48.103 GoRPCClient: error on JSON-RPC call
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:48.103 13:53:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
00:23:48.671 [2024-11-06 13:53:29.454406] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:23:48.671 [2024-11-06 13:53:29.554239] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:23:48.929 [2024-11-06 13:53:29.654085] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:23:48.929 [2024-11-06 13:53:29.654100] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:23:48.929 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:48.929 cookie is 0
00:23:48.929 is_local: 1
00:23:48.929 our_own: 0
00:23:48.929 wide_area: 0
00:23:48.929 multicast: 1
00:23:48.929 cached: 1
00:23:48.929 [2024-11-06 13:53:29.753923] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:23:48.929 [2024-11-06 13:53:29.753939] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:23:48.929 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:48.929 cookie is 0
00:23:48.929 is_local: 1
00:23:48.929 our_own: 0
00:23:48.929 wide_area: 0
00:23:48.929 multicast: 1
00:23:48.929 cached: 1
00:23:48.929 [2024-11-06 13:53:29.753950] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:23:48.929 [2024-11-06 13:53:29.853761] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:23:48.929 [2024-11-06 13:53:29.853777] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:23:48.929 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:48.929 cookie is 0
00:23:48.929 is_local: 1
00:23:48.929 our_own: 0
00:23:48.929 wide_area: 0
00:23:48.929 multicast: 1
00:23:48.929 cached: 1
00:23:49.188 [2024-11-06 13:53:29.953599] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:23:49.188 [2024-11-06 13:53:29.953614] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:23:49.188 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:49.188 cookie is 0
00:23:49.188 is_local: 1
00:23:49.188 our_own: 0
00:23:49.188 wide_area: 0
00:23:49.188 multicast: 1
00:23:49.188 cached: 1
00:23:49.188 [2024-11-06 13:53:29.953622] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:23:49.756 [2024-11-06 13:53:30.661013] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:23:49.756 [2024-11-06 13:53:30.661038] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:23:49.756 [2024-11-06 13:53:30.661053] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:23:50.015 [2024-11-06 13:53:30.746953] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:23:50.015 [2024-11-06 13:53:30.805227] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:23:50.015 [2024-11-06 13:53:30.805787] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x156c590:1 started.
00:23:50.015 [2024-11-06 13:53:30.807483] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:23:50.015 [2024-11-06 13:53:30.807509] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:23:50.015 [2024-11-06 13:53:30.809642] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x156c590 was disconnected and freed. delete nvme_qpair.
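At this point the suite has torn down the mDNS discovery service with bdev_nvme_stop_mdns_discovery, restarted it, and asserted that registering a second discovery under the already-used bdev name mdns is rejected with JSON-RPC error -17 (File exists), which is the request/response pair logged above; the resolve/attach lines then show the discovered subsystems being re-attached on port 4421. A condensed, hypothetical replay of that lifecycle using SPDK's scripts/rpc.py (the method names and parameters are exactly the ones in the trace; the standalone invocation is this sketch's assumption):

    # Start mDNS-based discovery and confirm it is registered.
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'   # expect: mdns
    # Re-registering the same name must fail with -17 (File exists).
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
        || echo 'duplicate registration rejected, as the test expects'
    # Tear it down when done.
    rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns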
00:23:50.015 [2024-11-06 13:53:30.860334] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:23:50.015 [2024-11-06 13:53:30.860355] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:23:50.015 [2024-11-06 13:53:30.860368] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:23:50.015 [2024-11-06 13:53:30.946277] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:23:50.274 [2024-11-06 13:53:31.004462] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:23:50.274 [2024-11-06 13:53:31.004931] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16dceb0:1 started.
00:23:50.274 [2024-11-06 13:53:31.006221] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:23:50.274 [2024-11-06 13:53:31.006245] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:23:50.274 [2024-11-06 13:53:31.009308] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16dceb0 was disconnected and freed. delete nvme_qpair.
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.563 13:53:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:53.563 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 13:53:34.076191] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
2024/11/06 13:53:34 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:23:53.564 request:
00:23:53.564 {
00:23:53.564 "method": "bdev_nvme_start_mdns_discovery",
00:23:53.564 "params": {
00:23:53.564 "name": "cdc",
00:23:53.564 "svcname": "_nvme-disc._tcp",
00:23:53.564 "hostnqn": "nqn.2021-12.io.spdk:test"
00:23:53.564 }
00:23:53.564 }
00:23:53.564 Got JSON-RPC error response
00:23:53.564 GoRPCClient: error on JSON-RPC call
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
[2024-11-06 13:53:34.246652] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:23:53.564 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:23:53.564 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:23:53.564 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:23:53.564 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:53.564 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:53.564 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:23:53.564 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:23:53.564 13:53:34
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.564 13:53:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:23:54.500 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:54.500 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:23:54.500 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:23:54.501 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95661 00:23:54.501 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95661 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95691 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:23:54.759 Got SIGTERM, quitting. 00:23:54.759 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:23:54.759 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:23:54.759 avahi-daemon 0.8 exiting. 
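The check_mdns_request_exists helper exercised above works by substring-matching each parseable avahi-browse record ("+;" browse lines and "=;" resolve lines carrying hostname, address, port and TXT fields) against the expected process name, IP and port, with a "found" or "not found" expectation. Below is a minimal sketch of that matching loop, assuming the same avahi-browse invocation seen in the trace; the function name and argument handling are illustrative rather than a verbatim copy of host/mdns_discovery.sh.

# Sketch of the record-matching logic from the trace above (illustrative).
# avahi-browse -p emits parseable records:
#   +;iface;proto;name;type;domain                        (browsed)
#   =;iface;proto;name;type;domain;host;address;port;txt  (resolved)
check_mdns_record() {
    local process=$1 ip=$2 port=$3 check_type=$4 line
    while IFS= read -r line; do
        # Plain substring checks, as in the [[ $line == *spdk1* ]] tests above
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0
            return 1   # record present although "not found" was expected
        fi
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)
    # No matching record seen: success only if absence was expected
    [[ $check_type == "not found" ]]
}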
00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.759 rmmod nvme_tcp 00:23:54.759 rmmod nvme_fabrics 00:23:54.759 rmmod nvme_keyring 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95605 ']' 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95605 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # '[' -z 95605 ']' 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # kill -0 95605 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # uname 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95605 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:54.759 killing process with pid 95605 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95605' 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@971 -- # kill 95605 00:23:54.759 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@976 -- # wait 95605 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:55.018 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:55.276 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:55.276 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.276 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:55.276 13:53:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.276 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:23:55.536 00:23:55.536 real 0m23.326s 00:23:55.536 user 0m43.503s 00:23:55.536 sys 0m3.529s 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.536 ************************************ 00:23:55.536 END TEST nvmf_mdns_discovery 00:23:55.536 ************************************ 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.536 ************************************ 00:23:55.536 START TEST nvmf_host_multipath 00:23:55.536 ************************************ 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:55.536 * Looking for test storage... 
00:23:55.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:23:55.536 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:23:55.796 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.797 --rc genhtml_branch_coverage=1 00:23:55.797 --rc genhtml_function_coverage=1 00:23:55.797 --rc genhtml_legend=1 00:23:55.797 --rc geninfo_all_blocks=1 00:23:55.797 --rc geninfo_unexecuted_blocks=1 00:23:55.797 00:23:55.797 ' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.797 --rc genhtml_branch_coverage=1 00:23:55.797 --rc genhtml_function_coverage=1 00:23:55.797 --rc genhtml_legend=1 00:23:55.797 --rc geninfo_all_blocks=1 00:23:55.797 --rc geninfo_unexecuted_blocks=1 00:23:55.797 00:23:55.797 ' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.797 --rc genhtml_branch_coverage=1 00:23:55.797 --rc genhtml_function_coverage=1 00:23:55.797 --rc genhtml_legend=1 00:23:55.797 --rc geninfo_all_blocks=1 00:23:55.797 --rc geninfo_unexecuted_blocks=1 00:23:55.797 00:23:55.797 ' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:55.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.797 --rc genhtml_branch_coverage=1 00:23:55.797 --rc genhtml_function_coverage=1 00:23:55.797 --rc genhtml_legend=1 00:23:55.797 --rc geninfo_all_blocks=1 00:23:55.797 --rc geninfo_unexecuted_blocks=1 00:23:55.797 00:23:55.797 ' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.797 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:55.798 Cannot find device "nvmf_init_br" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:55.798 Cannot find device "nvmf_init_br2" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:55.798 Cannot find device "nvmf_tgt_br" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.798 Cannot find device "nvmf_tgt_br2" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:55.798 Cannot find device "nvmf_init_br" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:55.798 Cannot find device "nvmf_init_br2" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:55.798 Cannot find device "nvmf_tgt_br" 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:23:55.798 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:56.057 Cannot find device "nvmf_tgt_br2" 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:56.057 Cannot find device "nvmf_br" 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:56.057 Cannot find device "nvmf_init_if" 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:56.057 Cannot find device "nvmf_init_if2" 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:23:56.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:56.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
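The nvmf_veth_init sequence traced above and continuing just below builds one bridged topology: a network namespace nvmf_tgt_ns_spdk holding the target interfaces (10.0.0.3, 10.0.0.4), host-side initiator interfaces (10.0.0.1, 10.0.0.2), and a bridge nvmf_br that enslaves the peer ends so both sides can reach each other. A condensed sketch of the core steps, assuming standard iproute2; the second interface of each kind (nvmf_init_if2, nvmf_tgt_if2), the link-up commands, and the iptables ACCEPT rules that follow in the trace are omitted here for brevity.

# Condensed sketch of the veth/bridge setup above (not a replacement for nvmf/common.sh)
ip netns add nvmf_tgt_ns_spdk
# One veth pair per side; the *_br ends stay on the host for bridging
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Initiators at 10.0.0.1/.2, targets at 10.0.0.3/.4 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# Enslave the peer ends to one bridge so initiator and target share an L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br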
00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:56.057 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:56.317 13:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:56.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:56.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:23:56.317 00:23:56.317 --- 10.0.0.3 ping statistics --- 00:23:56.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.317 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:56.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:56.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:23:56.317 00:23:56.317 --- 10.0.0.4 ping statistics --- 00:23:56.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.317 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:56.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:56.317 00:23:56.317 --- 10.0.0.1 ping statistics --- 00:23:56.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.317 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:56.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:56.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:56.317 00:23:56.317 --- 10.0.0.2 ping statistics --- 00:23:56.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.317 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96345 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96345 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 96345 ']' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.317 13:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:56.317 [2024-11-06 13:53:37.150919] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:23:56.317 [2024-11-06 13:53:37.151000] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.576 [2024-11-06 13:53:37.304725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:56.576 [2024-11-06 13:53:37.347707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.576 [2024-11-06 13:53:37.347761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.576 [2024-11-06 13:53:37.347772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.576 [2024-11-06 13:53:37.347782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.576 [2024-11-06 13:53:37.347790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.576 [2024-11-06 13:53:37.348665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.576 [2024-11-06 13:53:37.348679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.144 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.144 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:23:57.144 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.144 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.144 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:57.402 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.402 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96345 00:23:57.402 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.402 [2024-11-06 13:53:38.315070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.661 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:57.661 Malloc0 00:23:57.661 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:57.920 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.179 13:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:58.438 [2024-11-06 13:53:39.188505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.438 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:23:58.697 [2024-11-06 13:53:39.396744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96443 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96443 /var/tmp/bdevperf.sock 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 96443 ']' 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:58.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:58.697 13:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:59.631 13:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:59.631 13:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:23:59.631 13:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:59.632 13:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:00.199 Nvme0n1 00:24:00.199 13:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:00.457 Nvme0n1 00:24:00.458 13:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.458 13:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:01.393 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:01.393 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:01.652 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
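set_ANA_state above drives the two listeners into the states named by its arguments (here 4420 non_optimized, 4421 optimized); confirm_io_on_port then attaches the nvmf_path.bt bpftrace probes and reads back which port the initiator actually routes I/O to, producing the "@path[<addr>, <port>]: <count>" lines seen later in the trace. A minimal sketch of that flip-and-verify cycle, reusing the rpc.py calls and jq filter visible in the trace; the standalone loop itself is illustrative.

# Sketch of the ANA flip-and-verify cycle driven above (illustrative)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for entry in "4420 non_optimized" "4421 optimized"; do
    read -r port state <<< "$entry"
    # Mark one listener preferred; the multipath initiator should shift I/O to it
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s "$port" -n "$state"
done
# confirm_io_on_port then reads the optimized port back, as in the trace:
$rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'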
00:24:01.911 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:01.911 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96530 00:24:01.911 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.911 13:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.517 Attaching 4 probes... 00:24:08.517 @path[10.0.0.3, 4421]: 21036 00:24:08.517 @path[10.0.0.3, 4421]: 21695 00:24:08.517 @path[10.0.0.3, 4421]: 21601 00:24:08.517 @path[10.0.0.3, 4421]: 21571 00:24:08.517 @path[10.0.0.3, 4421]: 21750 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96530 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:08.517 13:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:08.517 13:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:24:08.517 13:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:08.517 13:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:08.517 13:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96663 00:24:08.517 13:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:15.086 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:15.086 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:15.086 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:15.086 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.086 Attaching 4 probes... 00:24:15.086 @path[10.0.0.3, 4420]: 19508 00:24:15.086 @path[10.0.0.3, 4420]: 19795 00:24:15.086 @path[10.0.0.3, 4420]: 20092 00:24:15.086 @path[10.0.0.3, 4420]: 20093 00:24:15.086 @path[10.0.0.3, 4420]: 19980 00:24:15.086 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96663 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:15.087 13:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:15.346 13:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:15.346 13:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96795 00:24:15.346 13:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:15.346 13:53:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.916 Attaching 4 probes... 
00:24:21.916 @path[10.0.0.3, 4421]: 15582 00:24:21.916 @path[10.0.0.3, 4421]: 21396 00:24:21.916 @path[10.0.0.3, 4421]: 21369 00:24:21.916 @path[10.0.0.3, 4421]: 21121 00:24:21.916 @path[10.0.0.3, 4421]: 21046 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:21.916 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96795 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96933 00:24:21.917 13:54:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:28.487 Attaching 4 probes... 
00:24:28.487 00:24:28.487 00:24:28.487 00:24:28.487 00:24:28.487 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96933 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:28.487 13:54:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:28.487 13:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:28.487 13:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:28.487 13:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:28.487 13:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97059 00:24:28.487 13:54:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:35.057 Attaching 4 probes... 
00:24:35.057 @path[10.0.0.3, 4421]: 20647 00:24:35.057 @path[10.0.0.3, 4421]: 21082 00:24:35.057 @path[10.0.0.3, 4421]: 21187 00:24:35.057 @path[10.0.0.3, 4421]: 21076 00:24:35.057 @path[10.0.0.3, 4421]: 21201 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97059 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:35.057 [2024-11-06 13:54:15.867842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 [2024-11-06 13:54:15.867891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 [2024-11-06 13:54:15.867902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 [2024-11-06 13:54:15.867912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 [2024-11-06 13:54:15.867921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 [2024-11-06 13:54:15.867929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195eb0 is same with the state(6) to be set 00:24:35.057 13:54:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:24:35.994 13:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:35.994 13:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97200 00:24:35.994 13:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:35.994 13:54:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:42.565 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:42.565 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:42.565 Attaching 4 probes... 00:24:42.565 @path[10.0.0.3, 4420]: 19673 00:24:42.565 @path[10.0.0.3, 4420]: 20213 00:24:42.565 @path[10.0.0.3, 4420]: 20123 00:24:42.565 @path[10.0.0.3, 4420]: 19041 00:24:42.565 @path[10.0.0.3, 4420]: 18840 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97200 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:42.565 [2024-11-06 13:54:23.333460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:42.565 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:42.824 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:24:49.390 13:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:49.390 13:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97397 00:24:49.390 13:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:49.390 13:54:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96345 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:55.982 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:55.983 Attaching 4 probes... 
00:24:55.983 @path[10.0.0.3, 4421]: 19634 00:24:55.983 @path[10.0.0.3, 4421]: 19893 00:24:55.983 @path[10.0.0.3, 4421]: 20207 00:24:55.983 @path[10.0.0.3, 4421]: 20136 00:24:55.983 @path[10.0.0.3, 4421]: 19999 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97397 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96443 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 96443 ']' 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 96443 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96443 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:55.983 killing process with pid 96443 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96443' 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 96443 00:24:55.983 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 96443 00:24:55.983 { 00:24:55.983 "results": [ 00:24:55.983 { 00:24:55.983 "job": "Nvme0n1", 00:24:55.983 "core_mask": "0x4", 00:24:55.983 "workload": "verify", 00:24:55.983 "status": "terminated", 00:24:55.983 "verify_range": { 00:24:55.983 "start": 0, 00:24:55.983 "length": 16384 00:24:55.983 }, 00:24:55.983 "queue_depth": 128, 00:24:55.983 "io_size": 4096, 00:24:55.983 "runtime": 54.693547, 00:24:55.983 "iops": 8704.079843276575, 00:24:55.983 "mibps": 34.00031188779912, 00:24:55.983 "io_failed": 0, 00:24:55.983 "io_timeout": 0, 00:24:55.983 "avg_latency_us": 14691.00280834549, 00:24:55.983 "min_latency_us": 457.3044176706827, 00:24:55.983 "max_latency_us": 7061253.963052209 00:24:55.983 } 00:24:55.983 ], 00:24:55.983 "core_count": 1 00:24:55.983 } 00:24:55.983 13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96443 00:24:55.983 13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:55.983 [2024-11-06 13:53:39.456532] Starting SPDK v25.01-pre git sha1 079966333 / 
DPDK 24.03.0 initialization... 00:24:55.983 [2024-11-06 13:53:39.456619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96443 ] 00:24:55.983 [2024-11-06 13:53:39.605987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.983 [2024-11-06 13:53:39.674830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.983 Running I/O for 90 seconds... 00:24:55.983 10494.00 IOPS, 40.99 MiB/s [2024-11-06T13:54:36.916Z] 10698.00 IOPS, 41.79 MiB/s [2024-11-06T13:54:36.916Z] 10717.33 IOPS, 41.86 MiB/s [2024-11-06T13:54:36.916Z] 10749.75 IOPS, 41.99 MiB/s [2024-11-06T13:54:36.916Z] 10760.20 IOPS, 42.03 MiB/s [2024-11-06T13:54:36.916Z] 10765.00 IOPS, 42.05 MiB/s [2024-11-06T13:54:36.916Z] 10771.43 IOPS, 42.08 MiB/s [2024-11-06T13:54:36.916Z] 10757.00 IOPS, 42.02 MiB/s [2024-11-06T13:54:36.916Z] [2024-11-06 13:53:49.312783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.312842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.312903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.312921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.312940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.312954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.312973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.312987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
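The wall of nvme_qpair messages that follows is the bdevperf I/O log replayed from try.txt: each WRITE/READ print_command entry is paired with a completion carrying NVMe status ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h, printed as "(03/02)"), which is what the host sees on a path whose listener was just set inaccessible and what drives the path switch under test. When eyeballing a dump like this, a quick way to gauge how many I/Os were bounced that way is a one-line count over the trace file (filename as cat'ed above; the grep itself is just an illustrative one-liner, not part of the test):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt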
00:24:55.983 [2024-11-06 13:53:49.313102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.313381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.313394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.314025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.314086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.314102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.314120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.983 [2024-11-06 13:53:49.314134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:55.983 [2024-11-06 13:53:49.314153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:55.984 [2024-11-06 13:53:49.314726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.984 [2024-11-06 13:53:49.314851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.314979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.314992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:55.984 [2024-11-06 13:53:49.315353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.984 [2024-11-06 13:53:49.315366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:24:55.985 [2024-11-06 13:53:49.315741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.315948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.315961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.316532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.316556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.316579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.316593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:55.985 [2024-11-06 13:53:49.316612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.985 [2024-11-06 13:53:49.316625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:55.985 [2024-11-06 13:53:49.316644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:55.985 [2024-11-06 13:53:49.316657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... similar WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notice pairs repeated for qid:1, lba 34856-35128, sqhd 0022-0044 ...]
00:24:55.986 [2024-11-06 13:53:49.317828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:55.986 [2024-11-06 13:53:49.317842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[... similar READ command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notice pairs repeated for qid:1, lba 34200-34304, sqhd 0046-0053 ...]
00:24:55.986 10650.67 IOPS, 41.60 MiB/s [2024-11-06T13:54:36.919Z] 10566.30 IOPS, 41.27 MiB/s [2024-11-06T13:54:36.919Z] 10513.64 IOPS, 41.07 MiB/s [2024-11-06T13:54:36.920Z] 10471.25 IOPS, 40.90 MiB/s [2024-11-06T13:54:36.920Z] 10443.69 IOPS, 40.80 MiB/s [2024-11-06T13:54:36.920Z] 10405.64 IOPS, 40.65 MiB/s [2024-11-06T13:54:36.920Z] [2024-11-06 13:53:55.780063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:55.987 [2024-11-06 13:53:55.780132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... similar WRITE and READ command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notice pairs repeated for qid:1, lba 25224-26128, sqhd 0055 through 006d (sqhd wraps after 007f) ...]
00:24:55.991 [2024-11-06 13:53:55.788471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:55.991 [2024-11-06 13:53:55.788485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.991 [2024-11-06 13:53:55.808938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:55.991 [2024-11-06 13:53:55.808964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.808982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.809947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.809980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:24:55.992 [2024-11-06 13:53:55.810668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.810912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.810972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-11-06 13:53:55.811250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-11-06 13:53:55.811338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.992 [2024-11-06 13:53:55.811402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:55.992 [2024-11-06 13:53:55.811606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.992 [2024-11-06 13:53:55.811630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.811972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.811996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:55.993 [2024-11-06 13:53:55.812517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.812967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.812990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.813544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.813567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.814656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.814695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.814733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.814758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:55.993 [2024-11-06 13:53:55.814791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.993 [2024-11-06 13:53:55.814815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.814849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.814889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.814923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.814946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.814979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:24:55.994 [2024-11-06 13:53:55.815409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.815965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.815998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.816952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.816986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.817009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:55.994 [2024-11-06 13:53:55.817042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.994 [2024-11-06 13:53:55.817066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:55.995 [2024-11-06 13:53:55.817123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.995 [2024-11-06 13:53:55.817642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:55.995 [2024-11-06 13:53:55.817675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:55.995 - 00:24:56.000 [2024-11-06 13:53:55.817699 - 13:53:55.828684] nvme_qpair.c: [long run of near-identical *NOTICE* pairs elided]
Each elided pair is a 243:nvme_io_qpair_print_command line (WRITE sqid:1 nsid:1 len:8 over lba 25248-26240 with SGL DATA BLOCK OFFSET 0x0 len:0x1000, plus occasional READ sqid:1 nsid:1 len:8 over lba 25224-25240 with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, various cid)
followed by a 474:spdk_nvme_print_completion line reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling 0000-007f.
00:24:56.000 [2024-11-06 13:53:55.828706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-11-06 13:53:55.828722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:56.000 [2024-11-06 13:53:55.828744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-11-06 13:53:55.828760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:56.000 [2024-11-06 13:53:55.828781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-11-06 13:53:55.828797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:56.000 [2024-11-06 13:53:55.828819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-11-06 13:53:55.828839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:56.000 [2024-11-06 13:53:55.828874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.000 [2024-11-06 13:53:55.828891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.828912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.828928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.828950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.828966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.829971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.829986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:56.001 [2024-11-06 13:53:55.830566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.001 [2024-11-06 13:53:55.830954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:56.001 [2024-11-06 13:53:55.830976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.830998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:24:56.002 [2024-11-06 13:53:55.831687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.831982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.831994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.832013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.832044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.832057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.832698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.002 [2024-11-06 13:53:55.832722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:56.002 [2024-11-06 13:53:55.832743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.832986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.832999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:56.003 [2024-11-06 13:53:55.833278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.003 [2024-11-06 13:53:55.833566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.003 [2024-11-06 13:53:55.833598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.003 [2024-11-06 13:53:55.833629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.003 [2024-11-06 13:53:55.833894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:56.003 [2024-11-06 13:53:55.833912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.833925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.833943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.833956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.833974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.833988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:24:56.004 [2024-11-06 13:53:55.834229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.834703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.834716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.835377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.835403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.835426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.835440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.835459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.835472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:56.004 [2024-11-06 13:53:55.835491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.004 [2024-11-06 13:53:55.835505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:56.004-00:24:56.010 [2024-11-06 13:53:55.835-13:53:55.844] nvme_qpair.c: [condensed: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs — WRITE (and three READ) commands on sqid:1 nsid:1, lba 25224-26240, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd cycling 003c-007b and wrapping through 0000]
00:24:56.010 [2024-11-06 13:53:55.844124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1
lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.010 [2024-11-06 13:53:55.844424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.010 [2024-11-06 13:53:55.844456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.010 [2024-11-06 13:53:55.844487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:56.010 [2024-11-06 13:53:55.844506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.010 [2024-11-06 13:53:55.844519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:24:56.011 [2024-11-06 13:53:55.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.844972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.844995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.845506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.011 [2024-11-06 13:53:55.845519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:56.011 [2024-11-06 13:53:55.846145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:56.012 [2024-11-06 13:53:55.846327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.846978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.846991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:24:56.012 [2024-11-06 13:53:55.847319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:56.012 [2024-11-06 13:53:55.847352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.012 [2024-11-06 13:53:55.847365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.847976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.847994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:56.013 [2024-11-06 13:53:55.848912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.848979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.848999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.849030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.849079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.849111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.849142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:56.013 [2024-11-06 13:53:55.849172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.013 [2024-11-06 13:53:55.849185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:56.014 [2024-11-06 13:53:55.849548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.014 [2024-11-06 13:53:55.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[Repetitive qpair trace condensed. From 00:24:56.014 through 00:24:56.017 the log repeats pairs of exactly this form — 243:nvme_io_qpair_print_command followed by 474:spdk_nvme_print_completion — once per outstanding request on qid:1, nsid:1: WRITEs walking lba 25248 upward in len:8 steps (SGL DATA BLOCK OFFSET 0x0 len:0x1000) plus three READs at lba 25224, 25232, and 25240 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0). Every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0, sqhd advancing from 0x000b.]
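[Editor's note: status (03/02) decodes per the NVMe specification as status code type 0x3 ("path related") with status code 0x2 ("Asymmetric Access Inaccessible"): the controller's ANA state has made nsid:1 unreachable on this path, so every I/O queued on qid:1 is failed back to the host. A minimal sketch for tallying a burst like this one is below; the script name, file argument, and regexes are illustrative assumptions, not part of this run:]

import re
import sys
from collections import Counter

# Patterns matching the two NOTICE record formats visible in the trace above.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(path):
    # Collapse all whitespace so records split by console line-wrapping rejoin
    # before matching.
    text = " ".join(open(path).read().split())
    ops = Counter(m["op"] for m in CMD_RE.finditer(text))
    lbas = [int(m["lba"]) for m in CMD_RE.finditer(text)]
    statuses = Counter(
        f'{m["status"]} ({m["sct"]}/{m["sc"]})' for m in CPL_RE.finditer(text)
    )
    print("commands:   ", dict(ops))
    if lbas:
        print("lba window: ", f"{min(lbas)}-{max(lbas)}")
    print("completions:", dict(statuses))

if __name__ == "__main__":
    # Usage (hypothetical filename): python3 summarize_qpair.py console.log
    summarize(sys.argv[1])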
[The trace continues in the same pattern through 00:24:56.020: the remaining queued I/O (WRITEs plus the three READs) fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing through 0x007f, wrapping past 0x0000, and reaching 0x0058 as the same lba window (25224-26240) is reported a second time under fresh cids.]
00:24:56.020 [2024-11-06 13:53:55.857786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.857798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:56.020 [2024-11-06 13:53:55.862796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.862969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.862982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:56.020 [2024-11-06 13:53:55.863601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.020 [2024-11-06 13:53:55.863617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:56.020 10040.60 IOPS, 39.22 MiB/s [2024-11-06T13:54:36.953Z] 9699.00 IOPS, 37.89 MiB/s [2024-11-06T13:54:36.953Z] 9755.12 IOPS, 38.11 MiB/s [2024-11-06T13:54:36.953Z] 9807.44 IOPS, 38.31 MiB/s [2024-11-06T13:54:36.953Z] 9852.79 IOPS, 38.49 MiB/s [2024-11-06T13:54:36.953Z] 9884.75 IOPS, 38.61 MiB/s [2024-11-06T13:54:36.953Z] 9915.14 IOPS, 38.73 MiB/s [2024-11-06T13:54:36.954Z] [2024-11-06 13:54:02.714835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.714916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.714993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:24:56.021 [2024-11-06 13:54:02.715838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.715981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.715995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:56.021 [2024-11-06 13:54:02.716286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.021 [2024-11-06 13:54:02.716300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:56.022 [2024-11-06 13:54:02.716848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.716906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.716940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.716973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.716998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.022 [2024-11-06 13:54:02.717173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 
13:54:02.717516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.717984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:56.022 [2024-11-06 13:54:02.718018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.022 [2024-11-06 13:54:02.718031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718311] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 
13:54:02.718662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.718984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.718997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.023 [2024-11-06 13:54:02.719409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:56.023 [2024-11-06 13:54:02.719430] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, [2024-11-06 13:54:02.719443 - 13:54:02.720092]: successive *NOTICE* command/completion pairs for WRITE sqid:1 lba:129864-129952 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:129016-129064 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0017-0029 p:0 m:0 dnr:0
00:24:56.023 9669.36 IOPS, 37.77 MiB/s [2024-11-06T13:54:36.957Z] 9248.96 IOPS, 36.13 MiB/s [2024-11-06T13:54:36.957Z] 8863.58 IOPS, 34.62 MiB/s [2024-11-06T13:54:36.957Z] 8509.04 IOPS, 33.24 MiB/s [2024-11-06T13:54:36.957Z] 8181.77 IOPS, 31.96 MiB/s [2024-11-06T13:54:36.957Z] 7878.74 IOPS, 30.78 MiB/s [2024-11-06T13:54:36.957Z] 7597.36 IOPS, 29.68 MiB/s [2024-11-06T13:54:36.957Z] 7532.21 IOPS, 29.42 MiB/s [2024-11-06T13:54:36.957Z] 7632.63 IOPS, 29.81 MiB/s [2024-11-06T13:54:36.957Z] 7727.03 IOPS, 30.18 MiB/s [2024-11-06T13:54:36.957Z] 7814.44 IOPS, 30.53 MiB/s [2024-11-06T13:54:36.957Z] 7898.76 IOPS, 30.85 MiB/s [2024-11-06T13:54:36.957Z] 7976.32 IOPS, 31.16 MiB/s [2024-11-06T13:54:36.957Z]
nvme_qpair.c, [2024-11-06 13:54:15.867286 - 13:54:15.869491]: successive *NOTICE* command/completion pairs for WRITE sqid:1 cid:71 lba:123360 len:8 and READ sqid:1 lba:122720-123216 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0056-007f,0000-0015 p:0 m:0 dnr:0
nvme_qpair.c, [2024-11-06 13:54:15.870025 - 13:54:15.871855]: successive *NOTICE* command/completion pairs for the remaining outstanding I/O (READ sqid:1 lba:122664-122712 and lba:123224-123352, WRITE sqid:1 lba:123368-123680, all len:8), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
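The paired codes above follow spdk_nvme_print_completion's "STATUS (sct/sc)" convention: (03/02) is status code type 0x3 (path-related) with status code 0x02, ANA Inaccessible, while (00/08) is the generic status "command aborted due to SQ deletion", raised for I/O still queued when the host tears the submission queue down during reset. A tiny illustrative bash lookup for just these two pairs (a hypothetical helper, not part of the test scripts):

  # Hypothetical decoder for the two "(sct/sc)" pairs that occur in this log.
  decode_status() {
      case "$1/$2" in
          03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;  # path-related: ANA state change
          00/08) echo "ABORTED - SQ DELETION" ;;           # generic: SQ deleted mid-I/O
          *)     echo "unknown status ($1/$2)" ;;
      esac
  }

  decode_status 03 02   # -> ASYMMETRIC ACCESS INACCESSIBLE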
00:24:56.027 [2024-11-06 13:54:15.872658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:56.027 [2024-11-06 13:54:15.872732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.027 [2024-11-06 13:54:15.872750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.027 [2024-11-06 13:54:15.872789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1323190 (9): Bad file descriptor
00:24:56.027 [2024-11-06 13:54:15.873106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.027 [2024-11-06 13:54:15.873136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1323190 with addr=10.0.0.3, port=4421
00:24:56.027 [2024-11-06 13:54:15.873151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1323190 is same with the state(6) to be set
00:24:56.027 [2024-11-06 13:54:15.873271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1323190 (9): Bad file descriptor
00:24:56.027 [2024-11-06 13:54:15.873378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:56.027 [2024-11-06 13:54:15.873394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:56.027 [2024-11-06 13:54:15.873409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:56.027 [2024-11-06 13:54:15.873434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:56.027 [2024-11-06 13:54:15.873455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:56.027 8038.94 IOPS, 31.40 MiB/s [2024-11-06T13:54:36.960Z] 8079.44 IOPS, 31.56 MiB/s [2024-11-06T13:54:36.960Z] 8128.73 IOPS, 31.75 MiB/s [2024-11-06T13:54:36.960Z] 8179.97 IOPS, 31.95 MiB/s [2024-11-06T13:54:36.960Z] 8228.21 IOPS, 32.14 MiB/s [2024-11-06T13:54:36.960Z] 8263.83 IOPS, 32.28 MiB/s [2024-11-06T13:54:36.960Z] 8291.02 IOPS, 32.39 MiB/s [2024-11-06T13:54:36.960Z] 8318.17 IOPS, 32.49 MiB/s [2024-11-06T13:54:36.960Z] 8341.51 IOPS, 32.58 MiB/s [2024-11-06T13:54:36.960Z] 8369.00 IOPS, 32.69 MiB/s [2024-11-06T13:54:36.960Z]
[2024-11-06 13:54:25.902010] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
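The block above is the multipath failover under test: I/O on the active path drains with ANA-inaccessible completions, the host drops the connection, the first reconnect to 10.0.0.3:4421 is refused (errno 111 is ECONNREFUSED), that reset attempt fails, and a retry roughly ten seconds later succeeds once the listener is reachable again. From the target side, a state flip like this can be driven with SPDK's nvmf_subsystem_listener_set_ana_state RPC; the sketch below is illustrative only, and the flag spellings and the sleep duration are assumptions, not taken from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Mark the listener inaccessible: in-flight I/O on that path completes
  # with ASYMMETRIC ACCESS INACCESSIBLE (03/02) and the host fails over.
  # (flag names assumed; check "rpc.py nvmf_subsystem_listener_set_ana_state -h")
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" \
      -t tcp -a 10.0.0.3 -s 4421 -n inaccessible

  sleep 10   # keep the path down long enough to observe a failed reset attempt

  # Restore it; the host's next reconnect should produce the
  # "Resetting controller successful" notice seen above.
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized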
00:24:56.027 8408.67 IOPS, 32.85 MiB/s [2024-11-06T13:54:36.960Z] 8453.50 IOPS, 33.02 MiB/s [2024-11-06T13:54:36.960Z] 8492.09 IOPS, 33.17 MiB/s [2024-11-06T13:54:36.960Z] 8527.94 IOPS, 33.31 MiB/s [2024-11-06T13:54:36.960Z] 8555.63 IOPS, 33.42 MiB/s [2024-11-06T13:54:36.960Z] 8584.42 IOPS, 33.53 MiB/s [2024-11-06T13:54:36.960Z] 8610.47 IOPS, 33.63 MiB/s [2024-11-06T13:54:36.960Z] 8638.90 IOPS, 33.75 MiB/s [2024-11-06T13:54:36.960Z] 8666.00 IOPS, 33.85 MiB/s [2024-11-06T13:54:36.960Z] 8690.41 IOPS, 33.95 MiB/s [2024-11-06T13:54:36.960Z]
00:24:56.027 Received shutdown signal, test time was about 54.694189 seconds
00:24:56.027
00:24:56.027                                Latency(us)
00:24:56.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.027 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:56.027 Verification LBA range: start 0x0 length 0x4000
00:24:56.027 Nvme0n1 : 54.69 8704.08 34.00 0.00 0.00 14691.00 457.30 7061253.96
00:24:56.027 ===================================================================================================================
00:24:56.027 Total : 8704.08 34.00 0.00 0.00 14691.00 457.30 7061253.96
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:56.027 rmmod nvme_tcp
00:24:56.027 rmmod nvme_fabrics
00:24:56.027 rmmod nvme_keyring
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96345 ']'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 96345 ']'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:56.027 killing process with pid 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96345'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 96345
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
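The @952-@976 calls traced above are autotest's killprocess helper: guard against an empty pid, skip processes that are already gone, refuse to kill a bare sudo wrapper, then kill and reap. A condensed bash sketch of that control flow (simplified; the real helper in common/autotest_common.sh handles more cases):

  # Condensed sketch of killprocess as traced above; not the verbatim helper.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # @952: called without a pid
      kill -0 "$pid" 2>/dev/null || return 0     # @956: process already gone
      if [ "$(uname)" = Linux ]; then
          # @957-@962: resolve the command name; never kill a sudo wrapper
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"       # @970
      kill "$pid"                                # @971
      wait "$pid" || true                        # @976: reap it, ignore exit code
  }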
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:24:56.287 00:24:56.287 real 1m0.821s 00:24:56.287 user 2m46.008s 00:24:56.287 sys 0m18.495s 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:56.287 ************************************ 00:24:56.287 END TEST nvmf_host_multipath 00:24:56.287 ************************************ 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.287 ************************************ 00:24:56.287 START TEST nvmf_timeout 00:24:56.287 ************************************ 00:24:56.287 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:56.548 * Looking for test storage... 00:24:56.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:56.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.548 --rc genhtml_branch_coverage=1 00:24:56.548 --rc genhtml_function_coverage=1 00:24:56.548 --rc genhtml_legend=1 00:24:56.548 --rc geninfo_all_blocks=1 00:24:56.548 --rc geninfo_unexecuted_blocks=1 00:24:56.548 00:24:56.548 ' 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:56.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.548 --rc genhtml_branch_coverage=1 00:24:56.548 --rc genhtml_function_coverage=1 00:24:56.548 --rc genhtml_legend=1 00:24:56.548 --rc geninfo_all_blocks=1 00:24:56.548 --rc geninfo_unexecuted_blocks=1 00:24:56.548 00:24:56.548 ' 00:24:56.548 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:56.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.548 --rc genhtml_branch_coverage=1 00:24:56.549 --rc genhtml_function_coverage=1 00:24:56.549 --rc genhtml_legend=1 00:24:56.549 --rc geninfo_all_blocks=1 00:24:56.549 --rc geninfo_unexecuted_blocks=1 00:24:56.549 00:24:56.549 ' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:56.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.549 --rc genhtml_branch_coverage=1 00:24:56.549 --rc genhtml_function_coverage=1 00:24:56.549 --rc genhtml_legend=1 00:24:56.549 --rc geninfo_all_blocks=1 00:24:56.549 --rc geninfo_unexecuted_blocks=1 00:24:56.549 00:24:56.549 ' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.549 
13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:56.549 13:54:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:56.549 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:56.808 Cannot find device "nvmf_init_br" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:56.808 Cannot find device "nvmf_init_br2" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:24:56.808 Cannot find device "nvmf_tgt_br" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:56.808 Cannot find device "nvmf_tgt_br2" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:56.808 Cannot find device "nvmf_init_br" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:56.808 Cannot find device "nvmf_init_br2" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:56.808 Cannot find device "nvmf_tgt_br" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:56.808 Cannot find device "nvmf_tgt_br2" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:56.808 Cannot find device "nvmf_br" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:56.808 Cannot find device "nvmf_init_if" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:56.808 Cannot find device "nvmf_init_if2" 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.808 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:56.808 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
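The network construction traced above reduces to a small veth-plus-bridge topology. The sketch below is a hand-condensed version of what the harness's nvmf_veth_init does, not the script itself: interface names, addresses, the port-4420 firewall rules, and the final reachability ping are taken verbatim from the trace, while ordering, error handling, and the pre-clean of stale devices are simplified (run as root):

    NS=nvmf_tgt_ns_spdk
    ip netns add $NS

    # Four veth pairs; the *_br ends become bridge ports, the *_if ends
    # carry the addresses (target-side ones live inside the namespace).
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns $NS
    ip link set nvmf_tgt_if2 netns $NS

    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # first initiator IP
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                   # second initiator IP
    ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target IP
    ip netns exec $NS ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target IP

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set $dev up
    done
    ip netns exec $NS ip link set nvmf_tgt_if up
    ip netns exec $NS ip link set nvmf_tgt_if2 up
    ip netns exec $NS ip link set lo up

    # One bridge ties the four host-side veth ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set $dev master nvmf_br
    done

    # Open the NVMe/TCP port on the initiator interfaces and allow
    # bridge-local forwarding, matching the ipts calls in the trace.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3    # initiator -> target smoke test, as in the log
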
00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:57.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:24:57.068 00:24:57.068 --- 10.0.0.3 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:57.068 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:24:57.068 00:24:57.068 --- 10.0.0.4 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:57.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:57.068 00:24:57.068 --- 10.0.0.1 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:57.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:24:57.068 00:24:57.068 --- 10.0.0.2 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97780 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97780 00:24:57.068 13:54:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97780 ']' 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.068 13:54:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:57.328 [2024-11-06 13:54:38.050649] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:24:57.328 [2024-11-06 13:54:38.050707] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.328 [2024-11-06 13:54:38.186210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:57.328 [2024-11-06 13:54:38.246558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.328 [2024-11-06 13:54:38.246600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.328 [2024-11-06 13:54:38.246610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.328 [2024-11-06 13:54:38.246619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.328 [2024-11-06 13:54:38.246625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
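For context, the target launch behind nvmfappstart boils down to the sketch below. The nvmf_tgt invocation (shm id 0, all tracepoint groups enabled, cores 0-1, inside the namespace) is verbatim from the trace; the readiness loop is only a simplified stand-in for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the default RPC socket until the app answers.
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
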
00:24:57.328 [2024-11-06 13:54:38.247920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.328 [2024-11-06 13:54:38.247921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.264 13:54:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:58.264 [2024-11-06 13:54:39.173336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.264 13:54:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:58.522 Malloc0 00:24:58.522 13:54:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.781 13:54:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.039 13:54:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:59.298 [2024-11-06 13:54:40.019764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97866 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97866 /var/tmp/bdevperf.sock 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97866 ']' 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:59.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
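Collected into one runnable sequence, the target-side configuration performed by timeout.sh@25-29 above is the following five RPCs; every command and flag is verbatim from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                        # listen on the in-namespace address
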
00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:59.298 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:59.298 [2024-11-06 13:54:40.079122] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:24:59.298 [2024-11-06 13:54:40.079196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97866 ] 00:24:59.298 [2024-11-06 13:54:40.215154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.557 [2024-11-06 13:54:40.256727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.125 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:00.125 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:25:00.125 13:54:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:00.384 13:54:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:00.643 NVMe0n1 00:25:00.643 13:54:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.643 13:54:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97913 00:25:00.643 13:54:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:00.643 Running I/O for 10 seconds... 
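The host side of the timeout test distills to the sketch below. The bdevperf invocation, the RPC socket, and the --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 knobs under test are copied verbatim from the trace; the backgrounding and the sleep are simplifications (the real harness waits on the bdevperf.sock RPC socket before issuing commands):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # -z: start with no bdevs and wait for RPC configuration;
    # -q 128 -o 4096 -w verify -t 10: queue depth, IO size, workload, runtime.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK \
        -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    sleep 1    # stand-in for waitforlisten on $SOCK

    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1
    # Declare the controller lost after 5 s without a path and attempt a
    # reconnect every 2 s -- the behaviour this test exercises.
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the queued job and collect results.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
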
00:25:01.580 13:54:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
11779.00 IOPS, 46.01 MiB/s [2024-11-06T13:54:42.774Z]
[2024-11-06 13:54:42.673829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc51510 is same with the state(6) to be set
[... the identical recv-state error repeats many more times for tqpair=0xc51510 while the listener is torn down; duplicates omitted ...]
[2024-11-06 13:54:42.674454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 13:54:42.674489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:54:42.675034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-06 13:54:42.675043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous command/completion pairs repeat for the remaining in-flight IOs (READ lba 104784-105064, WRITE lba 105272-105472), every one aborted with SQ DELETION (00/08); the run is truncated mid-stream here ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.675977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.675986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.676008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.844 [2024-11-06 13:54:42.676027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.844 [2024-11-06 13:54:42.676416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.844 [2024-11-06 13:54:42.676425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 
13:54:42.676489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.845 [2024-11-06 13:54:42.676774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.845 [2024-11-06 13:54:42.676915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.676941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.845 [2024-11-06 13:54:42.676949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.845 [2024-11-06 13:54:42.676957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105256 len:8 PRP1 0x0 PRP2 0x0 00:25:01.845 [2024-11-06 13:54:42.676965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.677083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.845 [2024-11-06 13:54:42.677096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.677106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.845 [2024-11-06 13:54:42.677115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.677125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.845 [2024-11-06 13:54:42.677134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.677144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.845 [2024-11-06 13:54:42.677152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.845 [2024-11-06 13:54:42.677161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6bf50 is same with the state(6) to be set 00:25:01.845 [2024-11-06 13:54:42.677326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:01.845 [2024-11-06 13:54:42.677344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6bf50 (9): Bad file descriptor 00:25:01.845 [2024-11-06 13:54:42.677417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.845 [2024-11-06 
00:25:01.845 [2024-11-06 13:54:42.677431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6bf50 with addr=10.0.0.3, port=4420
00:25:01.845 [2024-11-06 13:54:42.677440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6bf50 is same with the state(6) to be set
00:25:01.845 [2024-11-06 13:54:42.677454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6bf50 (9): Bad file descriptor
00:25:01.845 [2024-11-06 13:54:42.677467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:01.845 [2024-11-06 13:54:42.677476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:01.845 [2024-11-06 13:54:42.677486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:01.845 [2024-11-06 13:54:42.677496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:01.845 [2024-11-06 13:54:42.677505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:01.845 13:54:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:25:03.719 6548.50 IOPS, 25.58 MiB/s [2024-11-06T13:54:44.911Z]
00:25:03.719 4365.67 IOPS, 17.05 MiB/s [2024-11-06T13:54:44.911Z]
00:25:03.978 [... reconnect attempt at 13:54:44.674475 fails with the same sequence, from connect() failed, errno = 111 through "Resetting controller failed.", omitted ...]
00:25:03.978 [2024-11-06 13:54:44.674612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:03.978 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:25:03.978 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:03.978 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:04.236 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:25:04.236 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:25:04.236 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:04.236 13:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:04.236 13:54:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:25:04.236 13:54:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:25:05.737 3274.25 IOPS, 12.79 MiB/s [2024-11-06T13:54:46.928Z]
00:25:05.737 2619.40 IOPS, 10.23 MiB/s [2024-11-06T13:54:46.928Z]
00:25:05.995 [... reconnect attempt at 13:54:46.671555 fails with the same connect() failed, errno = 111 sequence and the controller is left resetting, omitted ...]
00:25:07.867 2182.83 IOPS, 8.53 MiB/s [2024-11-06T13:54:48.800Z]
00:25:07.867 1871.00 IOPS, 7.31 MiB/s [2024-11-06T13:54:48.800Z]
00:25:07.867 [2024-11-06 13:54:48.668560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:07.867 [2024-11-06 13:54:48.668603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:07.867 [2024-11-06 13:54:48.668613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:07.867 [2024-11-06 13:54:48.668626] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:25:07.867 [2024-11-06 13:54:48.668637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
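
The get_controller/get_bdev checks interleaved above poll bdevperf over its RPC socket while reconnects are still in flight. A minimal standalone sketch of the same probe, assuming the socket and script paths from this run (the wrapper bodies are inferred from the expanded commands in the trace, not copied from host/timeout.sh):

#!/usr/bin/env bash
# Sketch of the host/timeout.sh@37/@41 helpers seen in the trace above;
# assumes bdevperf is serving RPC on /var/tmp/bdevperf.sock as in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev() { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

# While the reconnect loop is still running the controller stays registered,
# so both checks pass (as at 13:54:44 above):
[[ $(get_controller) == NVMe0 ]] && [[ $(get_bdev) == NVMe0n1 ]]

Once the controller has been torn down, the same probes come back empty, which is what the [[ '' == '' ]] checks at 13:54:50 below verify.
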
00:25:08.803 1637.12 IOPS, 6.40 MiB/s
00:25:08.803 Latency(us)
00:25:08.803 [2024-11-06T13:54:49.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:08.803 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:08.803 Verification LBA range: start 0x0 length 0x4000
00:25:08.803 NVMe0n1 : 8.12 1613.47 6.30 15.77 0.00 78405.06 1519.96 7061253.96
00:25:08.803 [2024-11-06T13:54:49.736Z] ===================================================================================================================
00:25:08.803 [2024-11-06T13:54:49.736Z] Total : 1613.47 6.30 15.77 0.00 78405.06 1519.96 7061253.96
00:25:08.803 {
00:25:08.803   "results": [
00:25:08.803     {
00:25:08.803       "job": "NVMe0n1",
00:25:08.803       "core_mask": "0x4",
00:25:08.803       "workload": "verify",
00:25:08.803       "status": "finished",
00:25:08.803       "verify_range": {
00:25:08.803         "start": 0,
00:25:08.803         "length": 16384
00:25:08.803       },
00:25:08.803       "queue_depth": 128,
00:25:08.803       "io_size": 4096,
00:25:08.803       "runtime": 8.117301,
00:25:08.803       "iops": 1613.467333538574,
00:25:08.803       "mibps": 6.3026067716350545,
00:25:08.803       "io_failed": 128,
00:25:08.803       "io_timeout": 0,
00:25:08.803       "avg_latency_us": 78405.06312925047,
00:25:08.803       "min_latency_us": 1519.9614457831326,
00:25:08.803       "max_latency_us": 7061253.963052209
00:25:08.803     }
00:25:08.803   ],
00:25:08.803   "core_count": 1
00:25:08.803 }
00:25:09.370 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:25:09.370 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:09.370 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:09.627 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:25:09.627 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:25:09.627 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:09.627 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97913
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97866
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97866 ']'
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97866
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97866
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:25:09.886 killing process with pid 97866
00:25:09.886 Received shutdown signal, test time was about 9.097268 seconds
00:25:09.886 
00:25:09.886 Latency(us)
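
A quick cross-check of the summary above (not part of the captured log): bdevperf's "mibps" field is iops multiplied by the 4096-byte IO size, scaled to MiB, and the reported numbers are consistent with that:

# mibps = iops * io_size / 2^20, using the values from the JSON above:
awk 'BEGIN { print 1613.467333538574 * 4096 / (1024 * 1024) }'   # 6.30261, matching "mibps"
# The headline 6.30 MiB/s in the table is the same value rounded.
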
00:25:09.886 [2024-11-06T13:54:50.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.886 [2024-11-06T13:54:50.819Z] ===================================================================================================================
00:25:09.886 [2024-11-06T13:54:50.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97866'
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97866
00:25:09.886 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97866
00:25:10.144 13:54:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:10.144 [2024-11-06 13:54:50.998541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98072
00:25:10.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98072 /var/tmp/bdevperf.sock
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 98072 ']'
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:10.144 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:10.144 [2024-11-06 13:54:51.075854] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization...
00:25:10.144 [2024-11-06 13:54:51.076077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98072 ]
00:25:10.402 [2024-11-06 13:54:51.226822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:10.402 [2024-11-06 13:54:51.271630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:11.343 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:11.343 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:25:11.343 13:54:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:11.343 13:54:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:25:11.647 NVMe0n1
00:25:11.647 13:54:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98120
00:25:11.647 13:54:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:11.647 13:54:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:25:11.647 Running I/O for 10 seconds...
00:25:12.584 13:54:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:12.846 11197.00 IOPS, 43.74 MiB/s [2024-11-06T13:54:53.779Z]
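
The error flood that follows is provoked deliberately: the controller is attached with finite loss/fail timeouts, then the target listener is pulled out from under the running I/O. Reduced to its two RPC calls, the sequence is (commands copied verbatim from the trace above; this is a condensed sketch, not the full host/timeout.sh flow):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 1. Attach with a 5 s controller-loss timeout, 2 s fast-io-fail and 1 s reconnect delay:
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
# 2. Remove the target listener while bdevperf I/O is in flight; every queued
#    command is then completed as ABORTED - SQ DELETION, as in the flood below:
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
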
state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.656996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657043] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.657058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96b0 is same with the state(6) to be set 00:25:12.846 [2024-11-06 13:54:53.658488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.846 [2024-11-06 13:54:53.658528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.846 [2024-11-06 13:54:53.658545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.846 [2024-11-06 13:54:53.658556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.847 [2024-11-06 13:54:53.658690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.847 [2024-11-06 13:54:53.658708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.847 [2024-11-06 13:54:53.658726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
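The NOTICE pairs above are the host printing each outstanding command together with the synthetic completion used to fail it: the "(00/08)" in every completion decodes as Status Code Type 0x0 (generic) / Status Code 0x08, ABORTED - SQ DELETION, and dnr:0 means the do-not-retry bit is clear. A minimal C sketch of how such a status word is assembled, with the bit layout taken from the NVMe base spec; the struct and helper names here are illustrative, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* Completion layout trimmed to the fields the log prints. */
    struct nvme_cpl {
        uint32_t cdw0;
        uint16_t sqhd;
        uint16_t cid;
        uint16_t status; /* bit 0: phase; bits 8:1: SC; bits 11:9: SCT; bit 15: DNR */
    };

    /* Build "ABORTED - SQ DELETION (00/08)": SCT 0x0, SC 0x08, DNR clear. */
    static void fail_with_sq_deletion(struct nvme_cpl *cpl, uint16_t cid)
    {
        cpl->cdw0   = 0;
        cpl->sqhd   = 0;
        cpl->cid    = cid;
        cpl->status = (uint16_t)((0x0 << 9) | (0x08 << 1));
    }

    int main(void)
    {
        struct nvme_cpl cpl;
        fail_with_sq_deletion(&cpl, 25);
        printf("cid:%u cdw0:%u sqhd:%04x status:0x%04x\n",
               (unsigned)cpl.cid, (unsigned)cpl.cdw0,
               (unsigned)cpl.sqhd, (unsigned)cpl.status);
        return 0;
    }

Because DNR stays clear, the initiator is allowed to requeue and retry these commands after the controller reset that follows.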
00:25:12.847 [2024-11-06 13:54:53.658896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.658983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.658993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.847 [2024-11-06 13:54:53.659315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.847 [2024-11-06 13:54:53.659323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99152 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
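Each command print also names its data descriptor: the WRITEs above carry "SGL DATA BLOCK OFFSET 0x0" (in-capsule data addressed by offset), while the interleaved READs carry "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0". A hedged sketch of the descriptor-type decoding behind those strings, using the type codes from the NVMe base spec; the helper name is hypothetical:

    #include <stdint.h>
    #include <stdio.h>

    /* SGL descriptor identifier byte: upper nibble = type, lower = subtype. */
    static const char *sgl_type_name(uint8_t id)
    {
        switch (id >> 4) {
        case 0x0: return "DATA BLOCK";
        case 0x1: return "BIT BUCKET";
        case 0x2: return "SEGMENT";
        case 0x3: return "LAST SEGMENT";
        case 0x4: return "KEYED DATA BLOCK";
        case 0x5: return "TRANSPORT DATA BLOCK";
        default:  return "RESERVED";
        }
    }

    int main(void)
    {
        printf("0x00 -> %s\n", sgl_type_name(0x00)); /* the WRITEs above */
        printf("0x50 -> %s\n", sgl_type_name(0x50)); /* the READs above */
        return 0;
    }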
00:25:12.848 [2024-11-06 13:54:53.659653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.848 [2024-11-06 13:54:53.659851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.659983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.659995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.660004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.660013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.660021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.660031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.660039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.660048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.848 [2024-11-06 13:54:53.660057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.848 [2024-11-06 13:54:53.660066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 
13:54:53.660395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.849 [2024-11-06 13:54:53.660438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99560 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.849 [2024-11-06 13:54:53.660783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:25:12.849 [2024-11-06 13:54:53.660791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.849 [2024-11-06 13:54:53.660800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.849 [2024-11-06 13:54:53.660806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.660822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.660837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99624 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.660853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.660877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99632 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.660892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.660909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.660925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.660941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.660956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.660973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.660980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99656 len:8 PRP1 0x0 PRP2 0x0 
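From this point the log alternates "aborting queued i/o" with "Command completed manually": these requests were still sitting in the software queue when the qpair died, never received a transport data descriptor (presumably why they print PRP1 0x0 PRP2 0x0), and are completed in place rather than over the wire. A minimal sketch of that drain loop, assuming an illustrative request type rather than SPDK's internal struct nvme_request:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct req {
        struct req *next;
        uint16_t    cid;
        void      (*cb)(uint16_t cid, uint16_t status);
    };

    /* Pop every still-queued request and complete it manually with
     * ABORTED - SQ DELETION (SCT 0x0 / SC 0x08), mirroring the pairs above. */
    static void abort_queued_reqs(struct req **queued)
    {
        while (*queued != NULL) {
            struct req *r = *queued;
            *queued = r->next;
            r->cb(r->cid, (uint16_t)((0x0 << 9) | (0x08 << 1)));
        }
    }

    static void on_done(uint16_t cid, uint16_t status)
    {
        printf("cid:%u completed manually, status:0x%04x\n",
               (unsigned)cid, (unsigned)status);
    }

    int main(void)
    {
        struct req r2 = { NULL, 1, on_done };
        struct req r1 = { &r2, 0, on_done };
        struct req *queued = &r1;
        abort_queued_reqs(&queued);
        return 0;
    }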
00:25:12.850 [2024-11-06 13:54:53.660988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.660997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.661003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.661010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99664 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.661019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.661027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.661033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 13:54:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:25:12.850 [2024-11-06 13:54:53.683890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99672 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.683931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.683949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.683960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.683971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99680 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.683985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.683997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.684006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.684016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.684029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.684050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.684060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.684071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.684093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.684103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:99704 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.684116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.684137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.684146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99712 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.684157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.850 [2024-11-06 13:54:53.684178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.850 [2024-11-06 13:54:53.684188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99720 len:8 PRP1 0x0 PRP2 0x0 00:25:12.850 [2024-11-06 13:54:53.684199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.850 [2024-11-06 13:54:53.684369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.850 [2024-11-06 13:54:53.684395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.850 [2024-11-06 13:54:53.684419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.850 [2024-11-06 13:54:53.684443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.850 [2024-11-06 13:54:53.684454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:12.850 [2024-11-06 13:54:53.684690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:12.850 [2024-11-06 13:54:53.684711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:12.850 [2024-11-06 13:54:53.684800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.850 [2024-11-06 13:54:53.684819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420 00:25:12.850 [2024-11-06 
13:54:53.684830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:12.850 [2024-11-06 13:54:53.684848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:12.850 [2024-11-06 13:54:53.684882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:12.850 [2024-11-06 13:54:53.684894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:12.850 [2024-11-06 13:54:53.684907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:12.850 [2024-11-06 13:54:53.684918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:12.850 [2024-11-06 13:54:53.684931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:13.787 6169.00 IOPS, 24.10 MiB/s [2024-11-06T13:54:54.720Z] [2024-11-06 13:54:54.683390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.787 [2024-11-06 13:54:54.683423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420 00:25:13.787 [2024-11-06 13:54:54.683435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:13.788 [2024-11-06 13:54:54.683450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:13.788 [2024-11-06 13:54:54.683463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:13.788 [2024-11-06 13:54:54.683472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:13.788 [2024-11-06 13:54:54.683481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:13.788 [2024-11-06 13:54:54.683489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:13.788 [2024-11-06 13:54:54.683498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:13.788 13:54:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:14.047 [2024-11-06 13:54:54.884224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:14.047 13:54:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98120 00:25:14.981 4112.67 IOPS, 16.07 MiB/s [2024-11-06T13:54:55.914Z] [2024-11-06 13:54:55.702392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
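The passage above is the reset loop while the target listener is down: connect() to 10.0.0.3 port 4420 fails with errno = 111 (ECONNREFUSED on Linux), the reconnect poll reports "controller reinitialization failed", and the controller is reset again roughly once a second until timeout.sh@91 re-adds the listener, after which "Resetting controller successful" appears. A minimal sketch of that retry shape, assuming plain POSIX sockets rather than SPDK's sock layer:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int tcp_connect(const char *addr, int port)
    {
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons((unsigned short)port);
        inet_pton(AF_INET, addr, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            int saved = errno;
            close(fd);
            errno = saved;
            return -1; /* errno = 111 while nothing listens, as in the log */
        }
        return fd;
    }

    /* Re-dial until the listener comes back, matching the ~1 s cadence above. */
    static int reconnect_until_listening(const char *addr, int port)
    {
        for (;;) {
            int fd = tcp_connect(addr, port);
            if (fd >= 0)
                return fd;
            if (errno != ECONNREFUSED)
                return -1; /* some other failure: give up instead of spinning */
            sleep(1);
        }
    }

    int main(void)
    {
        int fd = reconnect_until_listening("10.0.0.3", 4420);
        if (fd >= 0) {
            printf("connected\n");
            close(fd);
        }
        return 0;
    }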
00:25:16.855 3084.50 IOPS, 12.05 MiB/s
[2024-11-06T13:54:58.725Z] 4427.20 IOPS, 17.29 MiB/s
[2024-11-06T13:54:59.663Z] 5559.00 IOPS, 21.71 MiB/s
[2024-11-06T13:55:00.598Z] 6384.71 IOPS, 24.94 MiB/s
[2024-11-06T13:55:01.978Z] 6994.38 IOPS, 27.32 MiB/s
[2024-11-06T13:55:02.547Z] 7462.56 IOPS, 29.15 MiB/s
[2024-11-06T13:55:02.807Z] 7845.80 IOPS, 30.65 MiB/s
00:25:21.874 Latency(us)
00:25:21.874 [2024-11-06T13:55:02.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.874 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:21.874 Verification LBA range: start 0x0 length 0x4000
00:25:21.874 NVMe0n1 : 10.01 7853.21 30.68 0.00 0.00 16276.41 1467.32 3045502.66
00:25:21.874 [2024-11-06T13:55:02.807Z] ===================================================================================================================
00:25:21.874 [2024-11-06T13:55:02.807Z] Total : 7853.21 30.68 0.00 0.00 16276.41 1467.32 3045502.66
00:25:21.874 {
00:25:21.874   "results": [
00:25:21.874     {
00:25:21.874       "job": "NVMe0n1",
00:25:21.874       "core_mask": "0x4",
00:25:21.874       "workload": "verify",
00:25:21.874       "status": "finished",
00:25:21.874       "verify_range": {
00:25:21.874         "start": 0,
00:25:21.874         "length": 16384
00:25:21.874       },
00:25:21.874       "queue_depth": 128,
00:25:21.874       "io_size": 4096,
00:25:21.874       "runtime": 10.006866,
00:25:21.874       "iops": 7853.207987395854,
00:25:21.874       "mibps": 30.676593700765054,
00:25:21.874       "io_failed": 0,
00:25:21.874       "io_timeout": 0,
00:25:21.874       "avg_latency_us": 16276.405218052369,
00:25:21.874       "min_latency_us": 1467.3220883534136,
00:25:21.874       "max_latency_us": 3045502.663453815
00:25:21.874     }
00:25:21.874   ],
00:25:21.874   "core_count": 1
00:25:21.874 }
00:25:21.874 13:55:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98246
00:25:21.874 13:55:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:21.874 13:55:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:25:21.874 Running I/O for 10 seconds...
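The JSON object above is bdevperf's result record, and its throughput fields are tied together by mibps = iops * io_size / 2^20. A quick C check with the values copied from that JSON:

    #include <stdio.h>

    int main(void)
    {
        double iops    = 7853.207987395854; /* "iops" from the JSON above */
        double io_size = 4096.0;            /* "io_size" */
        /* Prints 30.676594 MiB/s, matching "mibps": 30.676593700765054. */
        printf("%.6f MiB/s\n", iops * io_size / (1024.0 * 1024.0));
        return 0;
    }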
00:25:22.810 13:55:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:23.072 9719.00 IOPS, 37.96 MiB/s [2024-11-06T13:55:04.005Z]
00:25:23.072 [2024-11-06 13:55:03.789823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7d40 is same with the state(6) to be set
00:25:23.072 [... the recv-state error above repeats ~60 more times for tqpair=0xca7d40 between 13:55:03.789898 and 13:55:03.790437; verbatim duplicates elided ...]
00:25:23.073 [2024-11-06 13:55:03.791510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.073 [2024-11-06 13:55:03.791547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:23.073 [... 64 further READ command/completion pairs (lba:87032 through lba:87536, len:8 each), all ABORTED - SQ DELETION (00/08) on qid:1; duplicate pairs elided ...]
00:25:23.074 [... 48 analogous WRITE command/completion pairs (lba:87544 through lba:87920, len:8 each, SGL DATA BLOCK OFFSET 0x0), all ABORTED - SQ DELETION (00/08) on qid:1; duplicate pairs elided ...]
00:25:23.075 [2024-11-06 13:55:03.793637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:23.076 [... 15 queued WRITE commands (lba:87928 through lba:88040, len:8, PRP1 0x0 PRP2 0x0) completed manually, each ABORTED - SQ DELETION (00/08); duplicate entries elided ...]
00:25:23.076 [2024-11-06 13:55:03.794285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:23.076 [2024-11-06 13:55:03.794336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor
00:25:23.076 [2024-11-06 13:55:03.794411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:23.076 [2024-11-06 13:55:03.794431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420
00:25:23.076 [2024-11-06 13:55:03.794441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set
00:25:23.076 [2024-11-06 13:55:03.794454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor
00:25:23.076 [2024-11-06 13:55:03.794467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:25:23.076 [2024-11-06 13:55:03.794476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:25:23.076 [2024-11-06 13:55:03.794486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:23.076 [2024-11-06 13:55:03.794495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
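The reconnect attempt above, and the retries that follow below, all fail with connect() errno = 111 (ECONNREFUSED), which is the expected symptom here: the @99 step removed the only listener on 10.0.0.3:4420, so nothing accepts the TCP handshake until @102 restores it. A standalone probe of the same condition, as an illustrative sketch only (not part of the test scripts):

import errno
import socket

def listener_up(addr="10.0.0.3", port=4420, timeout=1.0):
    # Attempt a plain TCP connect, as the failing reconnects above do.
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:  # errno 111, matching the log
            return False
        raise

print("listener up?", listener_up())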
00:25:23.076 [2024-11-06 13:55:03.794505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:23.076 13:55:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:25:24.018 5439.00 IOPS, 21.25 MiB/s [2024-11-06T13:55:04.952Z] [2024-11-06 13:55:04.792959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.019 [2024-11-06 13:55:04.792996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420 00:25:24.019 [2024-11-06 13:55:04.793006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:24.019 [2024-11-06 13:55:04.793022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:24.019 [2024-11-06 13:55:04.793035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:25:24.019 [2024-11-06 13:55:04.793044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:25:24.019 [2024-11-06 13:55:04.793053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:24.019 [2024-11-06 13:55:04.793061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:25:24.019 [2024-11-06 13:55:04.793071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:24.956 3626.00 IOPS, 14.16 MiB/s [2024-11-06T13:55:05.889Z] [2024-11-06 13:55:05.791517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.956 [2024-11-06 13:55:05.791693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420 00:25:24.956 [2024-11-06 13:55:05.791804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:24.956 [2024-11-06 13:55:05.791874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:24.956 [2024-11-06 13:55:05.791934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:25:24.956 [2024-11-06 13:55:05.792027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:25:24.956 [2024-11-06 13:55:05.792080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:24.956 [2024-11-06 13:55:05.792109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
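The per-second bdevperf samples printed during the outage (9719.00, then 5439.00 and 3626.00 above, then 2719.50 and 2175.60 below) are cumulative averages rather than instantaneous rates: completions stop at roughly 10878 total I/Os once the listener is gone, and total/elapsed reproduces every later sample exactly. A sketch of that arithmetic (the 10878 total is inferred from the samples, not printed by the tool):

# bdevperf's periodic samples are cumulative: total completed I/Os / seconds.
# 10878 is inferred from the samples (5439.00 * 2 == 3626.00 * 3 == 10878).
total_ios = 10878.0
for t in range(2, 6):
    print(f"t={t}s: {total_ios / t:.2f} IOPS")  # 5439.00, 3626.00, 2719.50, 2175.60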
00:25:24.956 [2024-11-06 13:55:05.792159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:25.894 2719.50 IOPS, 10.62 MiB/s [2024-11-06T13:55:06.827Z] [2024-11-06 13:55:06.792427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.894 [2024-11-06 13:55:06.792591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3cf50 with addr=10.0.0.3, port=4420 00:25:25.894 [2024-11-06 13:55:06.792688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3cf50 is same with the state(6) to be set 00:25:25.894 [2024-11-06 13:55:06.792920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3cf50 (9): Bad file descriptor 00:25:25.894 [2024-11-06 13:55:06.793265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:25:25.894 [2024-11-06 13:55:06.793314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:25:25.894 [2024-11-06 13:55:06.793562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:25.894 [2024-11-06 13:55:06.793593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:25:25.894 [2024-11-06 13:55:06.793640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:25.894 13:55:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:26.153 [2024-11-06 13:55:07.014282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:26.153 13:55:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98246 00:25:27.090 2175.60 IOPS, 8.50 MiB/s [2024-11-06T13:55:08.023Z] [2024-11-06 13:55:07.814503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
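With the listener restored at timeout.sh line 102 (the nvmf_subsystem_add_listener call above, answered by the target's "NVMe/TCP Target Listening" notice), the host's very next reconnect attempt goes through and bdev_nvme logs "Resetting controller successful", after which queued I/O resumes and throughput climbs back over the following seconds. The fail/recover cycle this half of the test drives looks roughly like the sketch below; the NQN, address, and rpc.py path are taken from the log, while the matching remove step happens just before this excerpt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener: in-flight I/O is aborted and reconnects start failing.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3   # let several reconnect attempts fail (host/timeout.sh@101)
  # Restore the listener: the next reconnect attempt succeeds.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420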
00:25:28.961 3302.00 IOPS, 12.90 MiB/s [2024-11-06T13:55:10.831Z] 4351.43 IOPS, 17.00 MiB/s [2024-11-06T13:55:11.767Z] 5120.75 IOPS, 20.00 MiB/s [2024-11-06T13:55:12.704Z] 5739.67 IOPS, 22.42 MiB/s [2024-11-06T13:55:12.704Z] 6225.00 IOPS, 24.32 MiB/s 00:25:31.771 Latency(us) 00:25:31.771 [2024-11-06T13:55:12.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.771 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:31.771 Verification LBA range: start 0x0 length 0x4000 00:25:31.771 NVMe0n1 : 10.01 6231.20 24.34 5519.55 0.00 10877.56 1559.44 3018551.31 00:25:31.771 [2024-11-06T13:55:12.704Z] =================================================================================================================== 00:25:31.771 [2024-11-06T13:55:12.704Z] Total : 6231.20 24.34 5519.55 0.00 10877.56 0.00 3018551.31 00:25:31.771 { 00:25:31.771 "results": [ 00:25:31.771 { 00:25:31.771 "job": "NVMe0n1", 00:25:31.771 "core_mask": "0x4", 00:25:31.771 "workload": "verify", 00:25:31.771 "status": "finished", 00:25:31.771 "verify_range": { 00:25:31.771 "start": 0, 00:25:31.771 "length": 16384 00:25:31.771 }, 00:25:31.771 "queue_depth": 128, 00:25:31.771 "io_size": 4096, 00:25:31.771 "runtime": 10.009146, 00:25:31.771 "iops": 6231.200943616968, 00:25:31.771 "mibps": 24.340628686003782, 00:25:31.771 "io_failed": 55246, 00:25:31.771 "io_timeout": 0, 00:25:31.771 "avg_latency_us": 10877.563321483014, 00:25:31.771 "min_latency_us": 1559.4409638554216, 00:25:31.771 "max_latency_us": 3018551.3124497994 00:25:31.771 } 00:25:31.771 ], 00:25:31.771 "core_count": 1 00:25:31.771 } 00:25:31.771 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98072 00:25:31.771 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 98072 ']' 00:25:31.771 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 98072 00:25:31.771 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:25:31.771 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98072 00:25:32.030 killing process with pid 98072 00:25:32.030 Received shutdown signal, test time was about 10.000000 seconds 00:25:32.030 00:25:32.030 Latency(us) 00:25:32.030 [2024-11-06T13:55:12.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.030 [2024-11-06T13:55:12.963Z] =================================================================================================================== 00:25:32.030 [2024-11-06T13:55:12.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98072' 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 98072 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 98072 00:25:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
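The teardown traced above is autotest's killprocess helper doing its usual dance: check that the pid is set, kill -0 to confirm the process still exists, read the process name with ps to avoid killing a sudo wrapper, then kill and wait. A condensed reconstruction from the xtrace (the real helper lives in test/common/autotest_common.sh and carries more error handling than this sketch):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                      # still running?
      local process_name
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [[ $process_name != sudo ]] || return 1         # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }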
00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98373 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98373 /var/tmp/bdevperf.sock 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 98373 ']' 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.030 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:32.031 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.031 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:32.031 13:55:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.031 [2024-11-06 13:55:12.946936] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:25:32.031 [2024-11-06 13:55:12.947008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98373 ] 00:25:32.290 [2024-11-06 13:55:13.095731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.290 [2024-11-06 13:55:13.134845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.227 13:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.227 13:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:25:33.227 13:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98373 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:33.227 13:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98397 00:25:33.227 13:55:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:33.227 13:55:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:33.486 NVMe0n1 00:25:33.486 13:55:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:33.486 13:55:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98450 00:25:33.486 13:55:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:25:33.745 Running I/O for 10 seconds... 
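This second half of the test rebuilds the whole stack with explicit recovery limits: bdevperf is started suspended (-z) on its own RPC socket, a bpftrace probe is attached, bdev_nvme is configured with what rpc.py exposes as -r (bdev retry count, -1 meaning retry forever) and -e (transport ack timeout), a mapping taken from scripts/rpc.py and best treated as an assumption, and the controller is attached with a 2-second reconnect delay and a 5-second controller-loss timeout before I/O is kicked off through bdevperf.py. Condensed, the driven commands are:

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The nvmf_subsystem_remove_listener call that follows is what produces the wall of ABORTED - SQ DELETION completions below: deleting the listener drops the TCP connection, and every command still queued on I/O qpair 1 is completed manually with that status.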
00:25:34.685 13:55:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:25:34.685 21316.00 IOPS, 83.27 MiB/s [2024-11-06T13:55:15.618Z] [2024-11-06 13:55:15.566721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaa8f0 is same with the state(6) to be set
[... message repeated 14 more times for tqpair=0xcaa8f0, 13:55:15.566947 through 13:55:15.567072 ...]
00:25:34.686 [2024-11-06 13:55:15.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.686 [2024-11-06 13:55:15.567635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeated for every other READ still outstanding on qid:1, 13:55:15.567650 through 13:55:15.570034 ...]
00:25:34.689 [2024-11-06 13:55:15.570055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:34.689 [2024-11-06 13:55:15.570063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:34.689 [2024-11-06 13:55:15.570071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0
00:25:34.689 [2024-11-06 13:55:15.570079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.689 [2024-11-06 13:55:15.570168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.689 [2024-11-06 13:55:15.570180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin commands cid:1 through cid:3, 13:55:15.570188 through 13:55:15.570232 ...]
00:25:34.689 [2024-11-06 13:55:15.570240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df50 is same with the state(6) to be set
00:25:34.689 [2024-11-06 13:55:15.570428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller [2024-11-06 13:55:15.570453]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df50 (9): Bad file descriptor 00:25:34.689 [2024-11-06 13:55:15.570515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.689 [2024-11-06 13:55:15.570529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167df50 with addr=10.0.0.3, port=4420 00:25:34.689 [2024-11-06 13:55:15.570538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df50 is same with the state(6) to be set 00:25:34.689 [2024-11-06 13:55:15.570551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df50 (9): Bad file descriptor 00:25:34.689 [2024-11-06 13:55:15.570563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:34.689 [2024-11-06 13:55:15.570572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:34.689 [2024-11-06 13:55:15.570581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:34.689 [2024-11-06 13:55:15.570590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:34.689 [2024-11-06 13:55:15.570599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:34.689 13:55:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98450 00:25:36.563 11948.50 IOPS, 46.67 MiB/s [2024-11-06T13:55:17.755Z] 7965.67 IOPS, 31.12 MiB/s [2024-11-06T13:55:17.755Z] [2024-11-06 13:55:17.567523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.822 [2024-11-06 13:55:17.567557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167df50 with addr=10.0.0.3, port=4420 00:25:36.822 [2024-11-06 13:55:17.567569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df50 is same with the state(6) to be set 00:25:36.822 [2024-11-06 13:55:17.567585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df50 (9): Bad file descriptor 00:25:36.822 [2024-11-06 13:55:17.567599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:36.822 [2024-11-06 13:55:17.567609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:36.822 [2024-11-06 13:55:17.567619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:36.822 [2024-11-06 13:55:17.567628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:25:36.822 [2024-11-06 13:55:17.567637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:38.719 5974.25 IOPS, 23.34 MiB/s [2024-11-06T13:55:19.652Z] 4779.40 IOPS, 18.67 MiB/s [2024-11-06T13:55:19.652Z] [2024-11-06 13:55:19.564564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.719 [2024-11-06 13:55:19.564598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167df50 with addr=10.0.0.3, port=4420 00:25:38.719 [2024-11-06 13:55:19.564610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df50 is same with the state(6) to be set 00:25:38.719 [2024-11-06 13:55:19.564626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df50 (9): Bad file descriptor 00:25:38.719 [2024-11-06 13:55:19.564639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:38.719 [2024-11-06 13:55:19.564649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:38.719 [2024-11-06 13:55:19.564659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:38.719 [2024-11-06 13:55:19.564668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:25:38.719 [2024-11-06 13:55:19.564677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:40.592 3982.83 IOPS, 15.56 MiB/s [2024-11-06T13:55:21.784Z] 3413.86 IOPS, 13.34 MiB/s [2024-11-06T13:55:21.784Z] [2024-11-06 13:55:21.561552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:40.851 [2024-11-06 13:55:21.561579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:25:40.851 [2024-11-06 13:55:21.561589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:25:40.851 [2024-11-06 13:55:21.561597] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:25:40.851 [2024-11-06 13:55:21.561607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:25:41.789 2987.12 IOPS, 11.67 MiB/s
00:25:41.789 Latency(us)
00:25:41.789 [2024-11-06T13:55:22.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.789 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:41.789 NVMe0n1 : 8.12 2941.68 11.49 15.76 0.00 43148.44 1737.10 7061253.96
00:25:41.789 [2024-11-06T13:55:22.722Z] ===================================================================================================================
00:25:41.789 [2024-11-06T13:55:22.722Z] Total : 2941.68 11.49 15.76 0.00 43148.44 1737.10 7061253.96
00:25:41.789 {
00:25:41.789 "results": [
00:25:41.789 {
00:25:41.789 "job": "NVMe0n1",
00:25:41.789 "core_mask": "0x4",
00:25:41.789 "workload": "randread",
00:25:41.789 "status": "finished",
00:25:41.789 "queue_depth": 128,
00:25:41.789 "io_size": 4096,
00:25:41.789 "runtime": 8.123589,
00:25:41.789 "iops": 2941.6800874588807,
00:25:41.789 "mibps": 11.490937841636253,
00:25:41.789 "io_failed": 128,
00:25:41.789 "io_timeout": 0,
00:25:41.789 "avg_latency_us": 43148.43938180192,
00:25:41.789 "min_latency_us": 1737.0987951807228,
00:25:41.789 "max_latency_us": 7061253.963052209
00:25:41.789 }
00:25:41.789 ],
00:25:41.789 "core_count": 1
00:25:41.789 }
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:41.789 Attaching 5 probes...
00:25:41.789 1175.958509: reset bdev controller NVMe0
00:25:41.789 1176.016224: reconnect bdev controller NVMe0
00:25:41.789 3173.001712: reconnect delay bdev controller NVMe0
00:25:41.789 3173.015204: reconnect bdev controller NVMe0
00:25:41.789 5170.036839: reconnect delay bdev controller NVMe0
00:25:41.789 5170.049957: reconnect bdev controller NVMe0
00:25:41.789 7167.075036: reconnect delay bdev controller NVMe0
00:25:41.789 7167.088446: reconnect bdev controller NVMe0
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98397
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98373
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 98373 ']'
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 98373
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98373
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
killing process with pid 98373
13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98373'
Received shutdown signal, test time was about 8.206827 seconds
00:25:41.789
00:25:41.789 Latency(us)
00:25:41.789 [2024-11-06T13:55:22.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.789 [2024-11-06T13:55:22.722Z] =================================================================================================================== 00:25:41.789 [2024-11-06T13:55:22.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 98373 00:25:41.789 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 98373 00:25:42.049 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.308 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:42.308 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:25:42.308 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.308 13:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.308 rmmod nvme_tcp 00:25:42.308 rmmod nvme_fabrics 00:25:42.308 rmmod nvme_keyring 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97780 ']' 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97780 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97780 ']' 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97780 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97780 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:42.308 killing process with pid 97780 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97780' 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97780 00:25:42.308 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97780 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.877 
13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.877 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:25:43.137 00:25:43.137 real 0m46.647s 00:25:43.137 user 2m13.628s 00:25:43.137 sys 0m6.564s 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:43.137 ************************************ 00:25:43.137 END TEST nvmf_timeout 00:25:43.137 ************************************ 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:43.137 00:25:43.137 real 5m45.528s 00:25:43.137 user 14m8.716s 00:25:43.137 sys 1m24.483s 00:25:43.137 13:55:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:43.137 ************************************ 00:25:43.137 13:55:23 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.137 END TEST nvmf_host 00:25:43.137 ************************************ 00:25:43.137 13:55:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:43.137 13:55:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:43.137 13:55:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:43.137 13:55:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:43.137 13:55:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:43.137 13:55:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:43.137 ************************************ 00:25:43.137 START TEST nvmf_target_core_interrupt_mode 00:25:43.137 ************************************ 00:25:43.137 13:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:43.397 * Looking for test storage... 00:25:43.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.397 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.397 --rc genhtml_branch_coverage=1 00:25:43.397 --rc genhtml_function_coverage=1 00:25:43.398 --rc genhtml_legend=1 00:25:43.398 --rc geninfo_all_blocks=1 00:25:43.398 --rc geninfo_unexecuted_blocks=1 00:25:43.398 00:25:43.398 ' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.398 --rc genhtml_branch_coverage=1 00:25:43.398 --rc genhtml_function_coverage=1 00:25:43.398 --rc genhtml_legend=1 00:25:43.398 --rc geninfo_all_blocks=1 00:25:43.398 --rc geninfo_unexecuted_blocks=1 00:25:43.398 00:25:43.398 ' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.398 --rc genhtml_branch_coverage=1 00:25:43.398 --rc genhtml_function_coverage=1 00:25:43.398 --rc genhtml_legend=1 00:25:43.398 --rc geninfo_all_blocks=1 00:25:43.398 --rc geninfo_unexecuted_blocks=1 00:25:43.398 00:25:43.398 ' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.398 --rc genhtml_branch_coverage=1 00:25:43.398 --rc genhtml_function_coverage=1 00:25:43.398 --rc genhtml_legend=1 00:25:43.398 --rc geninfo_all_blocks=1 00:25:43.398 --rc geninfo_unexecuted_blocks=1 00:25:43.398 00:25:43.398 ' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:43.398 ************************************ 00:25:43.398 START TEST nvmf_abort 00:25:43.398 ************************************ 00:25:43.398 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:43.659 * Looking for test storage... 00:25:43.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:43.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.659 --rc genhtml_branch_coverage=1 00:25:43.659 --rc genhtml_function_coverage=1 00:25:43.659 --rc genhtml_legend=1 00:25:43.659 --rc geninfo_all_blocks=1 00:25:43.659 --rc geninfo_unexecuted_blocks=1 00:25:43.659 00:25:43.659 ' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:43.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.659 --rc genhtml_branch_coverage=1 00:25:43.659 --rc genhtml_function_coverage=1 00:25:43.659 --rc genhtml_legend=1 00:25:43.659 --rc geninfo_all_blocks=1 00:25:43.659 --rc geninfo_unexecuted_blocks=1 00:25:43.659 00:25:43.659 ' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:43.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.659 --rc genhtml_branch_coverage=1 00:25:43.659 --rc genhtml_function_coverage=1 00:25:43.659 --rc genhtml_legend=1 00:25:43.659 --rc geninfo_all_blocks=1 00:25:43.659 --rc geninfo_unexecuted_blocks=1 00:25:43.659 00:25:43.659 ' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:43.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.659 --rc genhtml_branch_coverage=1 00:25:43.659 --rc genhtml_function_coverage=1 00:25:43.659 --rc genhtml_legend=1 00:25:43.659 --rc geninfo_all_blocks=1 00:25:43.659 --rc geninfo_unexecuted_blocks=1 00:25:43.659 00:25:43.659 ' 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:25:43.659 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.660 13:55:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.660 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:43.920 Cannot find device "nvmf_init_br" 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:43.920 Cannot find device "nvmf_init_br2" 00:25:43.920 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:43.921 Cannot find device "nvmf_tgt_br" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.921 Cannot find device "nvmf_tgt_br2" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:43.921 Cannot find device "nvmf_init_br" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:43.921 Cannot find device "nvmf_init_br2" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:43.921 Cannot find device "nvmf_tgt_br" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:43.921 Cannot find device "nvmf_tgt_br2" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:43.921 Cannot find device "nvmf_br" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:43.921 Cannot find device "nvmf_init_if" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:43.921 Cannot find device "nvmf_init_if2" 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.921 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:44.181 
13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:44.181 13:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:44.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:44.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:25:44.181
00:25:44.181 --- 10.0.0.3 ping statistics ---
00:25:44.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:44.181 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:44.181 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:44.181 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:25:44.181
00:25:44.181 --- 10.0.0.4 ping statistics ---
00:25:44.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:44.181 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:44.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:44.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms
00:25:44.181
00:25:44.181 --- 10.0.0.1 ping statistics ---
00:25:44.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:44.181 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:44.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:44.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:25:44.181
00:25:44.181 --- 10.0.0.2 ping statistics ---
00:25:44.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:44.181 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:44.181 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 13:55:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=98882 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 98882 00:25:44.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 98882 ']' 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:44.441 13:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:44.441 [2024-11-06 13:55:25.192651] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:44.441 [2024-11-06 13:55:25.193577] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:25:44.441 [2024-11-06 13:55:25.193631] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.441 [2024-11-06 13:55:25.348914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:44.701 [2024-11-06 13:55:25.389021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.701 [2024-11-06 13:55:25.389286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.701 [2024-11-06 13:55:25.389481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.701 [2024-11-06 13:55:25.389536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.701 [2024-11-06 13:55:25.389567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.701 [2024-11-06 13:55:25.390601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.701 [2024-11-06 13:55:25.390781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.701 [2024-11-06 13:55:25.390787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.701 [2024-11-06 13:55:25.464562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:44.701 [2024-11-06 13:55:25.465210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:44.701 [2024-11-06 13:55:25.465251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:44.701 [2024-11-06 13:55:25.466115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
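The veth/bridge plumbing logged above can be replayed by hand. Below is a minimal sketch of the topology the harness builds, reduced to one initiator-side and one target-side veth pair (the run above wires two of each into the bridge), assuming the same interface names and 10.0.0.0/24 addressing; run as root:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The SPDK_NVMF comment tags the rule so teardown can strip it later with
# iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step in the log).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3   # initiator side reaching the target address inside the namespace

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... --interrupt-mode -m 0xE), which is why every listener in this test binds to 10.0.0.3 rather than a host address.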
00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.270 [2024-11-06 13:55:26.133527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.270 Malloc0 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.270 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.529 Delay0 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.530 13:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.530 [2024-11-06 13:55:26.241549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.530 13:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:45.530 [2024-11-06 13:55:26.442807] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:48.061 Initializing NVMe Controllers 00:25:48.061 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:25:48.061 controller IO queue size 128 less than required 00:25:48.061 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:48.061 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:48.061 Initialization complete. Launching workers. 
00:25:48.061 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39064 00:25:48.061 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39121, failed to submit 66 00:25:48.061 success 39064, unsuccessful 57, failed 0 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.061 rmmod nvme_tcp 00:25:48.061 rmmod nvme_fabrics 00:25:48.061 rmmod nvme_keyring 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 98882 ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 98882 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 98882 ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 98882 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98882 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98882' 00:25:48.061 killing process with pid 98882 00:25:48.061 
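Distilled from the target/abort.sh steps logged above, the whole test amounts to the following RPC sequence; this is a sketch assuming nvmf_tgt is already up and rpc.py resolves to the repo's scripts/rpc.py (the run uses full /home/vagrant/spdk_repo/spdk paths):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
# Delay bdev latencies are given in microseconds, so 1000000 adds ~1 s per I/O,
# keeping commands in flight long enough for the abort example to chase them.
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the summary above the two sides line up: the 39064 reads the namespace reports as failed are the same commands the abort side counts as success, i.e. every successfully aborted read completes back to the initiator with an error status.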
13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 98882 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 98882 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.061 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:48.062 13:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.321 13:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:25:48.321 00:25:48.321 real 0m4.868s 00:25:48.321 user 0m9.160s 00:25:48.321 sys 0m1.786s 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:48.321 ************************************ 00:25:48.321 END TEST nvmf_abort 00:25:48.321 ************************************ 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:48.321 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:48.321 ************************************ 00:25:48.321 START TEST nvmf_ns_hotplug_stress 00:25:48.321 ************************************ 00:25:48.581 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:48.581 * Looking for test storage... 00:25:48.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.582 13:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:48.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.582 --rc genhtml_branch_coverage=1 00:25:48.582 --rc genhtml_function_coverage=1 00:25:48.582 --rc genhtml_legend=1 00:25:48.582 --rc geninfo_all_blocks=1 00:25:48.582 --rc geninfo_unexecuted_blocks=1 00:25:48.582 00:25:48.582 ' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:48.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.582 --rc genhtml_branch_coverage=1 00:25:48.582 --rc genhtml_function_coverage=1 00:25:48.582 --rc genhtml_legend=1 00:25:48.582 --rc geninfo_all_blocks=1 00:25:48.582 --rc geninfo_unexecuted_blocks=1 00:25:48.582 00:25:48.582 
' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:48.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.582 --rc genhtml_branch_coverage=1 00:25:48.582 --rc genhtml_function_coverage=1 00:25:48.582 --rc genhtml_legend=1 00:25:48.582 --rc geninfo_all_blocks=1 00:25:48.582 --rc geninfo_unexecuted_blocks=1 00:25:48.582 00:25:48.582 ' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:48.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.582 --rc genhtml_branch_coverage=1 00:25:48.582 --rc genhtml_function_coverage=1 00:25:48.582 --rc genhtml_legend=1 00:25:48.582 --rc geninfo_all_blocks=1 00:25:48.582 --rc geninfo_unexecuted_blocks=1 00:25:48.582 00:25:48.582 ' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.582 13:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.582 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.843 13:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:48.843 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:48.844 Cannot find device "nvmf_init_br" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:25:48.844 Cannot find device "nvmf_init_br2" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:48.844 Cannot find device "nvmf_tgt_br" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.844 Cannot find device "nvmf_tgt_br2" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:48.844 Cannot find device "nvmf_init_br" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:48.844 Cannot find device "nvmf_init_br2" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:48.844 Cannot find device "nvmf_tgt_br" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:48.844 Cannot find device "nvmf_tgt_br2" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:48.844 Cannot find device "nvmf_br" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:48.844 Cannot find device "nvmf_init_if" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:48.844 Cannot find device "nvmf_init_if2" 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.844 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:48.844 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:49.103 13:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.103 13:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:49.103 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:49.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:25:49.103 00:25:49.103 --- 10.0.0.3 ping statistics --- 00:25:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.103 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:49.103 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:49.103 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:49.103 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:25:49.103 00:25:49.103 --- 10.0.0.4 ping statistics --- 00:25:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.103 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:49.103 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:49.103 00:25:49.103 --- 10.0.0.1 ping statistics --- 00:25:49.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.103 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:49.103 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:49.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:25:49.363 00:25:49.363 --- 10.0.0.2 ping statistics --- 00:25:49.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.363 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=99209 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 99209 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 99209 ']' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:49.363 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:49.363 13:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:49.363 [2024-11-06 13:55:30.132983] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:49.363 [2024-11-06 13:55:30.133779] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:25:49.363 [2024-11-06 13:55:30.133823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.363 [2024-11-06 13:55:30.287322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.622 [2024-11-06 13:55:30.326776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.622 [2024-11-06 13:55:30.326825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.622 [2024-11-06 13:55:30.326835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.622 [2024-11-06 13:55:30.326844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.622 [2024-11-06 13:55:30.326851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.622 [2024-11-06 13:55:30.327827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.622 [2024-11-06 13:55:30.327936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.622 [2024-11-06 13:55:30.327942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.622 [2024-11-06 13:55:30.400186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:49.622 [2024-11-06 13:55:30.400899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:49.622 [2024-11-06 13:55:30.401173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:49.622 [2024-11-06 13:55:30.402116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
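What follows from here is the hotplug loop itself: ns_hotplug_stress.sh starts a 30-second randread load with spdk_nvme_perf and, while that load stays alive, keeps hot-removing and re-adding the Delay0 namespace and growing the NULL1 bdev. A sketch of that loop, assuming the control flow matches the script offsets logged below (@44 kill -0, @45 remove_ns, @46 add_ns, @49/@50 null_size/resize):

build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do            # keep stressing while the load runs
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-unplug nsid 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-replug it
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                     # resize the NULL1 namespace
done

The bursts of "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" below are the expected fallout of reads racing the unplugged namespace; the perf run continues through them rather than aborting.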
00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:25:50.192 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.452 [2024-11-06 13:55:31.269055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.452 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:50.713 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:50.973 [2024-11-06 13:55:31.721616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:50.973 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:51.232 13:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:51.491 Malloc0 00:25:51.491 13:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:51.491 Delay0 00:25:51.491 13:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.750 13:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:52.010 NULL1 00:25:52.010 13:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:52.269 13:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99333 00:25:52.269 13:55:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:52.269 13:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:52.269 13:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.647 Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 13:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:53.647 13:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:53.647 13:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:53.906 true 00:25:53.906 13:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:53.906 13:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:54.843 13:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.843 13:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:54.843 13:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:55.104 true 00:25:55.104 13:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:55.104 13:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:55.363 13:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:55.622 13:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:55.622 13:55:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:55.622 true 00:25:55.881 13:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:55.881 13:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.817 13:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.076 13:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:57.076 13:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:57.076 true 00:25:57.076 13:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:57.076 13:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:57.335 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.594 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:57.594 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:57.853 true 00:25:57.853 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:57.853 13:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:58.791 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.050 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:59.050 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:59.050 true 00:25:59.050 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:59.050 13:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.309 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.569 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:59.569 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:59.828 true 00:25:59.828 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:25:59.828 13:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.765 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:01.024 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:01.024 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:01.283 true 00:26:01.283 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:01.283 13:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.283 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:01.542 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:01.542 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:01.801 true 00:26:01.802 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:01.802 13:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:02.738 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:02.998 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:02.998 13:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:03.257 true 00:26:03.257 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:03.257 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:03.528 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:03.528 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:03.528 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:03.786 true 00:26:03.786 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:03.786 13:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:04.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:04.723 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:04.983 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:04.983 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:05.242 true 00:26:05.242 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:05.242 13:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:05.501 13:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:05.501 13:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:05.501 13:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:05.761 true 00:26:05.761 13:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:05.761 13:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.697 13:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:06.956 13:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:06.956 13:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:07.214 true 
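The sh@44-50 entries that repeat above and below are successive passes of the single-namespace hotplug loop. Reconstructed from the xtrace (a sketch inferred from the logged commands, not the script verbatim; PERF_PID belongs to the spdk_nvme_perf run shown after the next entry):

    # one stress iteration, repeated while the perf job is alive
    while kill -0 "$PERF_PID"; do                                       # sh@44: loop gate
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach Delay0
        ((null_size++))                                                 # sh@49: 1001, 1002, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                      # sh@50: grow NULL1 online; prints "true"
    done

Each pass therefore exercises three events against live connections: a namespace removal, a namespace attach, and an online resize of the second namespace.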
00:26:07.214 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:07.214 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.473 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:07.731 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:07.731 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:07.731 true 00:26:07.731 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:07.731 13:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:08.666 13:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:08.924 13:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:08.924 13:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:09.183 true 00:26:09.183 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:09.183 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:09.442 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:09.700 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:09.700 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:09.700 true 00:26:09.700 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:09.700 13:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:11.078 13:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.078 13:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
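The initiator load this loop races against was started earlier at sh@40; the invocation, copied from the log, is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000

That is 30 seconds of 512-byte random reads at queue depth 128 over the TCP listener; judging by the output, -Q 1000 lets the run continue past I/O errors and rate-limits their reporting, which is why the reads aborted by namespace removal surface only as "Message suppressed 999 times: Read completed with error (sct=0, sc=11)".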
00:26:11.078 13:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:11.078 true 00:26:11.337 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:11.337 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:11.337 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.596 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:11.596 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:11.855 true 00:26:11.855 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:11.855 13:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:12.791 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.050 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:13.050 13:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:13.309 true 00:26:13.309 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:13.309 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:13.309 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.568 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:13.568 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:13.826 true 00:26:13.826 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:13.826 13:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.764 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.022 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:15.022 13:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:15.281 true 00:26:15.281 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:15.281 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:15.540 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.540 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:15.540 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:15.799 true 00:26:15.799 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:15.799 13:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:16.739 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:16.998 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:16.998 13:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:17.257 true 00:26:17.257 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:17.257 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.515 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.774 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:17.774 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:17.774 true 00:26:17.774 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:17.774 13:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.153 13:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.153 13:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:19.153 13:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:19.153 true 00:26:19.153 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:19.153 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.412 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.670 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:19.670 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:19.930 true 00:26:19.930 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:19.930 13:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:20.867 13:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.126 13:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:21.126 13:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:21.385 true 00:26:21.385 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333 00:26:21.385 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.385 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.644 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:21.644 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1029
00:26:21.904 true
00:26:21.904 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333
00:26:21.904 13:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:22.840 Initializing NVMe Controllers
00:26:22.841 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:26:22.841 Controller IO queue size 128, less than required.
00:26:22.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:22.841 Controller IO queue size 128, less than required.
00:26:22.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:22.841 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:22.841 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:22.841 Initialization complete. Launching workers.
00:26:22.841 ========================================================
00:26:22.841 Latency(us)
00:26:22.841 Device Information : IOPS MiB/s Average min max
00:26:22.841 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 372.03 0.18 195259.57 2731.48 1013536.94
00:26:22.841 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14719.04 7.19 8696.41 1723.86 433949.70
00:26:22.841 ========================================================
00:26:22.841 Total : 15091.07 7.37 13295.66 1723.86 1013536.94
00:26:22.841
00:26:22.841 13:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:23.100 13:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:26:23.100 13:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:26:23.359 true
00:26:23.359 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99333
00:26:23.359 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99333) - No such process
00:26:23.359 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99333
00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0
)) 00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:23.618 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:23.877 null0 00:26:23.877 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:23.877 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:23.877 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:24.137 null1 00:26:24.137 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:24.137 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:24.137 13:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:24.396 null2 00:26:24.396 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:24.396 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:24.396 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:24.655 null3 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:24.655 null4 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:24.655 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:24.914 null5 00:26:24.915 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:24.915 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:24.915 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:25.174 null6 00:26:25.174 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:25.174 13:56:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:25.174 13:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:25.434 null7 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
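After the single-namespace phase ends (sh@53 wait, the sh@54-55 cleanup, sh@58 nthreads=8, and the eight bdev_null_create calls for null0 through null7 above), the trace switches to eight concurrent add/remove workers. The interleaved sh@14-18 entries all come from instances of one function, which the xtrace implies looks roughly like this (a reconstruction, not the script verbatim):

    # per-worker body implied by the sh@14-18 trace
    add_remove() {
        local nsid=$1 bdev=$2                                                       # sh@14, e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                              # sh@16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

Because all eight workers drive the same RPC socket and the same subsystem, their trace lines interleave freely from here on.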
00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
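The launcher wrapped around those workers is visible in the sh@58-66 entries: it backgrounds one add_remove per null bdev and collects the PIDs it later waits on (100376 through 100390 in this run). Reconstructed as a sketch from the trace:

    # launcher implied by the sh@58-66 trace
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # sh@62
        add_remove $((i + 1)) "null$i" &    # sh@63: NSID i+1 backed by bdev null$i
        pids+=($!)                          # sh@64
    done
    wait "${pids[@]}"                       # sh@66: wait 100376 100378 ... 100390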
00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 100376 100378 100380 100381 100383 100385 100388 100390 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.434 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.693 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:25.952 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.952 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.952 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:25.953 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:26.212 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:26.212 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:26.212 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.212 13:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:26.212 13:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.212 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:26.471 13:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.471 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:26.729 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
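
The records above and below trace the body of target/ns_hotplug_stress.sh (the @16-@18 markers): a ten-pass loop that hot-adds and hot-removes namespaces on nqn.2016-06.io.spdk:cnode1 while the rest of the test keeps I/O running against the subsystem. A minimal sketch of that loop's shape, assuming the null bdevs null0..null7 were created earlier in the script; the real script interleaves adds and removes more irregularly than this, as the surrounding records show, so the shuffled batches here are only illustrative:

    #!/usr/bin/env bash
    # Sketch of the add/remove race traced at ns_hotplug_stress.sh@16-18.
    # Assumes nvmf_tgt is running, the subsystem exists, and null bdevs
    # null0..null7 were created earlier in the script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do                       # ten hotplug passes, as traced
        for n in $(shuf -e {1..8}); do           # attach NSID n backed by null$((n-1))
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do           # detach in a different random order
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done
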
00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:26.730 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:26.988 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.988 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.988 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:26.988 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.988 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:26.989 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.247 13:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.247 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.248 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:27.248 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.248 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.248 13:56:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:27.506 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
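
Each traced RPC pair maps one null bdev to a namespace ID and later detaches it: nvmf_subsystem_add_ns -n <nsid> <subnqn> <bdev> attaches the named bdev to the subsystem as namespace <nsid>, and nvmf_subsystem_remove_ns <subnqn> <nsid> hot-removes it, so a connected initiator should see namespaces appear and disappear across the run. A one-shot version of the same calls, with a listing step in between to confirm the attach:

    # One add/inspect/remove cycle using the RPCs exercised above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" bdev_null_create null0 100 4096      # 100 MiB null bdev, 4 KiB blocks
    "$rpc" nvmf_subsystem_add_ns -n 1 "$nqn" null0   # attach as NSID 1
    "$rpc" nvmf_get_subsystems                  # JSON output lists the namespace
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove NSID 1
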
00:26:27.765 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:27.766 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.026 
13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:28.026 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:28.027 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:28.296 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:28.296 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:28.296 13:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:28.296 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:28.554 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.555 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:28.813 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:28.814 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:29.072 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:29.072 13:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:29.073 13:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.073 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.073 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.331 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.590 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:29.849 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
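
Once the loop counter reaches 10, the records that follow show the script clearing its exit trap (ns_hotplug_stress.sh@68) and running nvmftestfini (@70): the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the target process (pid 99209 here) is killed, SPDK_NVMF iptables rules are stripped, and the veth/bridge/namespace topology is torn down. A condensed sketch of that teardown under the same interface names as the log; $nvmf_tgt_pid is a stand-in for the pid the harness tracks, and the final netns delete is an assumption about what remove_spdk_ns amounts to:

    trap - SIGINT SIGTERM EXIT                      # test finished; drop the error trap
    modprobe -v -r nvme-tcp nvme-fabrics            # unload host-side transport modules
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"    # stop the target (pid 99209 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's firewall rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster                 # detach from the bridge
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge              # remove the bridge itself
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                # assumption: what remove_spdk_ns does
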
00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.109 13:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.109 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.368 rmmod nvme_tcp 00:26:30.368 rmmod nvme_fabrics 00:26:30.368 rmmod nvme_keyring 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:30.368 
13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 99209 ']' 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 99209 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 99209 ']' 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 99209 00:26:30.368 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 99209 00:26:30.628 killing process with pid 99209 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 99209' 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 99209 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 99209 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:30.628 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:30.887 13:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.887 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:26:31.146 00:26:31.146 real 0m42.579s 00:26:31.146 user 2m57.699s 00:26:31.146 sys 0m19.539s 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:31.146 ************************************ 00:26:31.146 END TEST nvmf_ns_hotplug_stress 00:26:31.146 ************************************ 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:31.146 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:31.147 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:31.147 ************************************ 00:26:31.147 START TEST nvmf_delete_subsystem 00:26:31.147 ************************************ 00:26:31.147 13:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:31.147 * Looking for test storage... 00:26:31.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:31.147 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:31.147 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:31.147 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.407 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:31.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.408 --rc genhtml_branch_coverage=1 00:26:31.408 --rc genhtml_function_coverage=1 00:26:31.408 --rc genhtml_legend=1 00:26:31.408 --rc geninfo_all_blocks=1 00:26:31.408 --rc geninfo_unexecuted_blocks=1 00:26:31.408 00:26:31.408 ' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:31.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.408 --rc genhtml_branch_coverage=1 00:26:31.408 --rc genhtml_function_coverage=1 00:26:31.408 --rc genhtml_legend=1 00:26:31.408 --rc geninfo_all_blocks=1 00:26:31.408 --rc geninfo_unexecuted_blocks=1 00:26:31.408 00:26:31.408 ' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:31.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.408 --rc genhtml_branch_coverage=1 00:26:31.408 --rc genhtml_function_coverage=1 00:26:31.408 --rc genhtml_legend=1 00:26:31.408 --rc geninfo_all_blocks=1 00:26:31.408 --rc geninfo_unexecuted_blocks=1 00:26:31.408 00:26:31.408 ' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:31.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.408 --rc genhtml_branch_coverage=1 00:26:31.408 --rc genhtml_function_coverage=1 00:26:31.408 --rc 
genhtml_legend=1 00:26:31.408 --rc geninfo_all_blocks=1 00:26:31.408 --rc geninfo_unexecuted_blocks=1 00:26:31.408 00:26:31.408 ' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.408 13:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.408 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.409 13:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:31.409 Cannot find device "nvmf_init_br" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:31.409 Cannot find device "nvmf_init_br2" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:31.409 Cannot find device "nvmf_tgt_br" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.409 Cannot find device "nvmf_tgt_br2" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:31.409 Cannot find device "nvmf_init_br" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:31.409 Cannot find device "nvmf_init_br2" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:31.409 Cannot find device "nvmf_tgt_br" 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:26:31.409 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:31.669 Cannot find device "nvmf_tgt_br2" 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:31.669 Cannot find device "nvmf_br" 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:31.669 Cannot find device "nvmf_init_if" 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:31.669 Cannot find device "nvmf_init_if2" 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:26:31.669 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.670 13:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.670 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:31.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:31.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:26:31.929 00:26:31.929 --- 10.0.0.3 ping statistics --- 00:26:31.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.929 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:31.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:31.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:26:31.929 00:26:31.929 --- 10.0.0.4 ping statistics --- 00:26:31.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.929 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:31.929 00:26:31.929 --- 10.0.0.1 ping statistics --- 00:26:31.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.929 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:31.929 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:31.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:31.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:26:31.929 00:26:31.930 --- 10.0.0.2 ping statistics --- 00:26:31.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.930 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=101788 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 101788 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 101788 ']' 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.930 13:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:31.930 [2024-11-06 13:56:12.807925] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:31.930 [2024-11-06 13:56:12.808903] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:26:31.930 [2024-11-06 13:56:12.808959] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.189 [2024-11-06 13:56:12.945450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:32.189 [2024-11-06 13:56:13.012421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.189 [2024-11-06 13:56:13.012466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.189 [2024-11-06 13:56:13.012475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.189 [2024-11-06 13:56:13.012483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.189 [2024-11-06 13:56:13.012489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.189 [2024-11-06 13:56:13.013824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.189 [2024-11-06 13:56:13.013830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.449 [2024-11-06 13:56:13.136151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:32.449 [2024-11-06 13:56:13.136843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:32.449 [2024-11-06 13:56:13.137124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
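The startup just traced reduces to one pattern: launch nvmf_tgt inside the test namespace with interrupt mode forced on, then poll for the RPC socket before any rpc_cmd is issued. A minimal sketch of that pattern in bash, reusing only the flags and paths visible in the trace (the fixed 30-iteration poll budget is an assumption, not waitforlisten's real timeout):

# Launch the target on cores 0-1 (-m 0x3) in interrupt mode, inside the
# netns the veth setup above created; flags copied from the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Wait for the UNIX-domain RPC socket to appear before talking to it.
# 30 x 0.1 s is an assumed budget, not the harness's actual limit.
for _ in $(seq 1 30); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done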
00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 [2024-11-06 13:56:13.787001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 [2024-11-06 13:56:13.815556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:33.019 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 NULL1 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.020 13:56:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 Delay0 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101839 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:33.020 13:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:33.279 [2024-11-06 13:56:14.039736] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
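Stripped of xtrace noise, the configuration under test is short: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.3:4420, and a null bdev wrapped in bdev_delay so that roughly one second of artificial latency keeps perf's queue-depth-128 I/O in flight when the subsystem is deleted. A sketch of the same sequence with plain rpc.py (rpc.py here stands in for the harness's rpc_cmd wrapper; every argument is taken from the trace above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512                 # 1000 MiB bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1,000,000 us (~1 s) per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With Delay0 exported, nvmf_delete_subsystem is issued while spdk_nvme_perf still has commands queued, so the burst of "completed with error (sct=0, sc=8)" lines below is the expected outcome: status-code-type 0, status 0x08 is the NVMe generic "Command Aborted due to SQ Deletion" status.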
00:26:35.183 13:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.183 13:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.183 13:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 Read completed with error (sct=0, sc=8) 00:26:35.183 Read completed with error (sct=0, sc=8) 00:26:35.183 starting I/O failed: -6 00:26:35.183 Read completed with error (sct=0, sc=8) 00:26:35.183 Read completed with error (sct=0, sc=8) 00:26:35.183 Read completed with error (sct=0, sc=8) 00:26:35.183 Write completed with error (sct=0, sc=8) 00:26:35.183 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 [2024-11-06 13:56:16.079527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7f1484000c40 is same with the state(6) to be set 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, 
sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Write completed with error (sct=0, sc=8) 00:26:35.184 starting I/O failed: -6 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.184 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 [2024-11-06 13:56:16.080691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaf7e0 is same with the state(6) to be set 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with 
error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:35.185 Read completed with error (sct=0, sc=8) 00:26:35.185 Write completed with error (sct=0, sc=8) 00:26:36.121 [2024-11-06 13:56:17.052261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaaee0 is same with the state(6) to be set 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 
00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Write completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.380 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 [2024-11-06 13:56:17.077221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaea50 is same with the state(6) to be set 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 [2024-11-06 13:56:17.077914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f148400d020 is same with the state(6) to be set 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 
Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 [2024-11-06 13:56:17.078115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1ea0 is same with the state(6) to be set 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Read completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 Write completed with error (sct=0, sc=8) 00:26:36.381 [2024-11-06 13:56:17.078436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f148400d800 is same with the state(6) to be set 00:26:36.381 Initializing NVMe Controllers 00:26:36.381 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.381 Controller IO queue size 128, less than required. 00:26:36.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.381 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:36.381 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:36.381 Initialization complete. Launching workers. 
00:26:36.381 ======================================================== 00:26:36.381 Latency(us) 00:26:36.381 Device Information : IOPS MiB/s Average min max 00:26:36.381 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.89 0.09 884524.65 355.36 1019469.99 00:26:36.381 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.40 0.09 884058.78 745.58 1020575.13 00:26:36.381 ======================================================== 00:26:36.381 Total : 351.29 0.17 884292.04 355.36 1020575.13 00:26:36.381 00:26:36.381 [2024-11-06 13:56:17.079617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaaee0 (9): Bad file descriptor 00:26:36.381 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:36.381 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.381 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:26:36.381 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101839 00:26:36.381 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101839 00:26:36.950 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101839) - No such process 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101839 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 101839 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 101839 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.950 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:36.951 [2024-11-06 13:56:17.611604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101885 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:36.951 13:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:36.951 [2024-11-06 13:56:17.813441] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
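The sequence above is the heart of the delete_subsystem test: the subsystem is recreated, a TCP listener and the Delay0 namespace are attached, and spdk_nvme_perf (PID 101885) is launched against it so the subsystem can be torn down while I/O is still in flight. A condensed sketch of that flow, reconstructed from the xtrace output (rpc_cmd is the harness wrapper around scripts/rpc.py; the delete step itself falls outside this excerpt and is shown here, as an assumption, using the standard nvmf_delete_subsystem RPC):

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # assumed: issued while perf still has I/O queued
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do  # poll until perf notices the dead connection and exits
    (( delay++ > 20 )) && break            # give up after ~10 s, mirroring the loop in the log
    sleep 0.5
done

In-flight commands aborted by the deletion complete with sct=0/sc=8 (ABORTED - SQ DELETION), which is what produced the wall of "completed with error" lines earlier; the polling below then confirms the perf process has gone away.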
00:26:37.210 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:37.210 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:37.210 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:37.778 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:37.778 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:37.778 13:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:38.346 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:38.346 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:38.346 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:38.915 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:38.915 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:38.915 13:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:39.483 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:39.483 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:39.483 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:39.742 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:39.742 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885 00:26:39.742 13:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:40.002 Initializing NVMe Controllers 00:26:40.002 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.002 Controller IO queue size 128, less than required. 00:26:40.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:40.002 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:40.002 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:40.002 Initialization complete. Launching workers. 
00:26:40.002 ========================================================
00:26:40.002 Latency(us)
00:26:40.002 Device Information : IOPS MiB/s Average min max
00:26:40.002 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004743.36 1000205.99 1019316.23
00:26:40.002 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006997.59 1000271.88 1020025.32
00:26:40.002 ========================================================
00:26:40.002 Total : 256.00 0.12 1005870.48 1000205.99 1020025.32
00:26:40.002
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101885
00:26:40.261 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101885) - No such process
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101885
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:40.261 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:40.562 rmmod nvme_tcp
00:26:40.562 rmmod nvme_fabrics
00:26:40.562 rmmod nvme_keyring
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 101788 ']'
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 101788
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 101788 ']'
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 101788
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 --
# ps --no-headers -o comm= 101788 00:26:40.562 killing process with pid 101788 00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101788' 00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 101788 00:26:40.562 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 101788 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:40.822 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:26:41.082 ************************************ 00:26:41.082 END TEST nvmf_delete_subsystem 00:26:41.082 ************************************ 00:26:41.082 00:26:41.082 real 0m10.027s 00:26:41.082 user 0m24.771s 00:26:41.082 sys 0m2.462s 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:41.082 13:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:41.082 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:41.082 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:41.082 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:41.082 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:41.341 ************************************ 00:26:41.341 START TEST nvmf_host_management 00:26:41.341 ************************************ 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:41.341 * Looking for test storage... 
00:26:41.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.341 --rc genhtml_branch_coverage=1 00:26:41.341 --rc genhtml_function_coverage=1 00:26:41.341 --rc genhtml_legend=1 00:26:41.341 --rc geninfo_all_blocks=1 00:26:41.341 --rc geninfo_unexecuted_blocks=1 00:26:41.341 00:26:41.341 ' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.341 --rc genhtml_branch_coverage=1 00:26:41.341 --rc genhtml_function_coverage=1 00:26:41.341 --rc genhtml_legend=1 00:26:41.341 --rc geninfo_all_blocks=1 00:26:41.341 --rc geninfo_unexecuted_blocks=1 00:26:41.341 00:26:41.341 ' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.341 --rc genhtml_branch_coverage=1 00:26:41.341 --rc genhtml_function_coverage=1 00:26:41.341 --rc genhtml_legend=1 00:26:41.341 --rc geninfo_all_blocks=1 00:26:41.341 --rc geninfo_unexecuted_blocks=1 00:26:41.341 00:26:41.341 ' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.341 --rc genhtml_branch_coverage=1 00:26:41.341 --rc genhtml_function_coverage=1 00:26:41.341 --rc genhtml_legend=1 
00:26:41.341 --rc geninfo_all_blocks=1 00:26:41.341 --rc geninfo_unexecuted_blocks=1 00:26:41.341 00:26:41.341 ' 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.341 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.601 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.602 13:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:41.602 13:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:41.602 Cannot find device "nvmf_init_br" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:41.602 Cannot find device "nvmf_init_br2" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:41.602 Cannot find device "nvmf_tgt_br" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.602 Cannot find device "nvmf_tgt_br2" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:41.602 Cannot find device "nvmf_init_br" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:26:41.602 Cannot find device "nvmf_init_br2" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:41.602 Cannot find device "nvmf_tgt_br" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:41.602 Cannot find device "nvmf_tgt_br2" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:41.602 Cannot find device "nvmf_br" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:41.602 Cannot find device "nvmf_init_if" 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:26:41.602 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:41.862 Cannot find device "nvmf_init_if2" 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.862 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:42.122 13:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:42.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:42.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.135 ms 00:26:42.122 00:26:42.122 --- 10.0.0.3 ping statistics --- 00:26:42.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.122 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:42.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:42.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:26:42.122 00:26:42.122 --- 10.0.0.4 ping statistics --- 00:26:42.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.122 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:42.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:26:42.122 00:26:42.122 --- 10.0.0.1 ping statistics --- 00:26:42.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.122 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:42.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:42.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:26:42.122 00:26:42.122 --- 10.0.0.2 ping statistics --- 00:26:42.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.122 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=102171 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 102171 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 102171 ']' 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:26:42.122 13:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:42.122 [2024-11-06 13:56:22.951108] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:42.122 [2024-11-06 13:56:22.952182] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:26:42.122 [2024-11-06 13:56:22.952232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.382 [2024-11-06 13:56:23.106828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.382 [2024-11-06 13:56:23.148185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.382 [2024-11-06 13:56:23.148235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.382 [2024-11-06 13:56:23.148245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.382 [2024-11-06 13:56:23.148253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.382 [2024-11-06 13:56:23.148261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.382 [2024-11-06 13:56:23.149598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.382 [2024-11-06 13:56:23.150422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.382 [2024-11-06 13:56:23.150628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:42.382 [2024-11-06 13:56:23.150650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.382 [2024-11-06 13:56:23.223795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:42.382 [2024-11-06 13:56:23.225813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:42.382 [2024-11-06 13:56:23.226116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:42.382 [2024-11-06 13:56:23.226116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:42.382 [2024-11-06 13:56:23.226995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
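Before those startup notices, the target was brought up inside the network namespace assembled by nvmf_veth_init above. Condensing the xtrace into a sketch (interface names and addresses exactly as they appear in the log; waitforlisten is the autotest_common.sh helper that blocks until the app's RPC socket answers, and the diagram is a reconstruction, not verbatim script output):

# Veth topology built by nvmf_veth_init:
#   host side:                          target netns (nvmf_tgt_ns_spdk):
#   nvmf_init_if   10.0.0.1/24          nvmf_tgt_if    10.0.0.3/24
#   nvmf_init_if2  10.0.0.2/24          nvmf_tgt_if2   10.0.0.4/24
# each veth's peer (nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2)
# is enslaved to bridge nvmf_br, and iptables ACCEPT rules (tagged SPDK_NVMF)
# open TCP port 4420 on the initiator-side interfaces.

# Target launch, as run at nvmf/common.sh@508 above:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"  # default RPC socket /var/tmp/spdk.sock

-m 0x1E places reactors on cores 1-4 (hence "Total cores available: 4"), and --interrupt-mode is why each poll-group thread reports "Set spdk_thread (...) to intr mode": the reactors block on file-descriptor events instead of busy-polling.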
00:26:42.950 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:42.950 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:42.950 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.950 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.950 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 [2024-11-06 13:56:23.907955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.209 13:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 Malloc0 00:26:43.209 [2024-11-06 13:56:24.020321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=102243 00:26:43.209 13:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 102243 /var/tmp/bdevperf.sock 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 102243 ']' 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.209 { 00:26:43.209 "params": { 00:26:43.209 "name": "Nvme$subsystem", 00:26:43.209 "trtype": "$TEST_TRANSPORT", 00:26:43.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.209 "adrfam": "ipv4", 00:26:43.209 "trsvcid": "$NVMF_PORT", 00:26:43.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.209 "hdgst": ${hdgst:-false}, 00:26:43.209 "ddgst": ${ddgst:-false} 00:26:43.209 }, 00:26:43.209 "method": "bdev_nvme_attach_controller" 00:26:43.209 } 00:26:43.209 EOF 00:26:43.209 )") 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:43.209 13:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:43.209 "params": { 00:26:43.209 "name": "Nvme0", 00:26:43.209 "trtype": "tcp", 00:26:43.209 "traddr": "10.0.0.3", 00:26:43.209 "adrfam": "ipv4", 00:26:43.209 "trsvcid": "4420", 00:26:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:43.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:43.210 "hdgst": false, 00:26:43.210 "ddgst": false 00:26:43.210 }, 00:26:43.210 "method": "bdev_nvme_attach_controller" 00:26:43.210 }' 00:26:43.469 [2024-11-06 13:56:24.148584] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:26:43.469 [2024-11-06 13:56:24.148655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102243 ] 00:26:43.469 [2024-11-06 13:56:24.304357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.469 [2024-11-06 13:56:24.365218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.729 Running I/O for 10 seconds... 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1005 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1005 -ge 100 ']' 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.300 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:44.300 [2024-11-06 13:56:25.116892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.116934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.116954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.116964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.116975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.116984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.116994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.300 [2024-11-06 13:56:25.117314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.300 [2024-11-06 13:56:25.117322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.301 [2024-11-06 13:56:25.117977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.301 [2024-11-06 13:56:25.117987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.301 [2024-11-06 13:56:25.117995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.301 [2024-11-06 13:56:25.118006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.301 [2024-11-06 13:56:25.118014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.301 [2024-11-06 13:56:25.118024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.301 [2024-11-06 13:56:25.118033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.301 [2024-11-06 13:56:25.118042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.301 [2024-11-06 13:56:25.118051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.301 [2024-11-06 13:56:25.118061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.301 [2024-11-06 13:56:25.118069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.301 [2024-11-06 13:56:25.118080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.302 [2024-11-06 13:56:25.118088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.302 [2024-11-06 13:56:25.118098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.302 [2024-11-06 13:56:25.118107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.302 [2024-11-06 13:56:25.118117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.302 [2024-11-06 13:56:25.118126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.302 [2024-11-06 13:56:25.118136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.302 [2024-11-06 13:56:25.118144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.302 task offset: 8320 on job bdev=Nvme0n1 fails
00:26:44.302
00:26:44.302 Latency(us)
00:26:44.302 [2024-11-06T13:56:25.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.302 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:44.302 Job: Nvme0n1 ended in about 0.54 seconds with error
00:26:44.302 Verification LBA range: start 0x0 length 0x400
00:26:44.302 Nvme0n1 : 0.54 2024.69 126.54 119.10 0.00 29192.94 2539.85 27583.02
00:26:44.302 [2024-11-06T13:56:25.235Z] ===================================================================================================================
00:26:44.302 [2024-11-06T13:56:25.235Z] Total : 2024.69 126.54 119.10 0.00 29192.94 2539.85 27583.02
00:26:44.302 [2024-11-06 13:56:25.119137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.302 [2024-11-06 13:56:25.120797] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:44.302 [2024-11-06 13:56:25.120821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c660 (9): Bad file descriptor
00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:26:44.302 [2024-11-06 13:56:25.121617] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:26:44.302 [2024-11-06 13:56:25.121684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:26:44.302 [2024-11-06 13:56:25.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.302 [2024-11-06 13:56:25.121721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:26:44.302 [2024-11-06 13:56:25.121731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:26:44.302 [2024-11-06 13:56:25.121740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:44.302 [2024-11-06 13:56:25.121749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d5c660
00:26:44.302 [2024-11-06 13:56:25.121777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c660 (9): Bad file descriptor
00:26:44.302 [2024-11-06 13:56:25.121792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:44.302 [2024-11-06 13:56:25.121802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:44.302 [2024-11-06 13:56:25.121813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:44.302 [2024-11-06 13:56:25.121823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
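The error burst above is the expected outcome of this test step, not a defect: host_management.sh line 84 (visible earlier in the xtrace) removes nqn.2016-06.io.spdk:host0 from cnode0 while bdevperf still has I/O in flight, so the target aborts the submission queue (ABORTED - SQ DELETION), the reconnect's Fabric CONNECT is rejected with "does not allow host", and the controller reset fails; line 85 then re-adds the host. A minimal sketch of that revoke/restore pattern against a running SPDK target follows; the rpc.py path and the NQNs are taken from the trace, while the standalone script shape, the sequencing comments, and everything else are assumptions rather than the literal test code:

    #!/usr/bin/env bash
    # Sketch: revoke a host's access to an NVMe-oF subsystem mid-I/O, then restore it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode0
    hostnqn=nqn.2016-06.io.spdk:host0

    # Revoke: the host's next Fabric CONNECT (e.g. during a controller reset)
    # now fails with "Subsystem ... does not allow host ...", as logged above.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

    # ... initiator-side reconnect attempts are expected to fail here ...

    # Restore access so a later connection attempt can succeed again.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn"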
00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.302 13:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 102243 00:26:45.240 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (102243) - No such process 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:45.240 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.240 { 00:26:45.240 "params": { 00:26:45.240 "name": "Nvme$subsystem", 00:26:45.240 "trtype": "$TEST_TRANSPORT", 00:26:45.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.241 "adrfam": "ipv4", 00:26:45.241 "trsvcid": "$NVMF_PORT", 00:26:45.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.241 "hdgst": ${hdgst:-false}, 00:26:45.241 "ddgst": ${ddgst:-false} 00:26:45.241 }, 00:26:45.241 "method": "bdev_nvme_attach_controller" 00:26:45.241 } 00:26:45.241 EOF 00:26:45.241 )") 00:26:45.241 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:45.241 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:26:45.241 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:45.241 13:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:45.241 "params": { 00:26:45.241 "name": "Nvme0", 00:26:45.241 "trtype": "tcp", 00:26:45.241 "traddr": "10.0.0.3", 00:26:45.241 "adrfam": "ipv4", 00:26:45.241 "trsvcid": "4420", 00:26:45.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:45.241 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:45.241 "hdgst": false, 00:26:45.241 "ddgst": false 00:26:45.241 }, 00:26:45.241 "method": "bdev_nvme_attach_controller" 00:26:45.241 }' 00:26:45.500 [2024-11-06 13:56:26.189668] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:26:45.500 [2024-11-06 13:56:26.189733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102293 ] 00:26:45.500 [2024-11-06 13:56:26.340716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.500 [2024-11-06 13:56:26.399413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.759 Running I/O for 1 seconds... 00:26:46.956 2085.00 IOPS, 130.31 MiB/s 00:26:46.956 Latency(us) 00:26:46.956 [2024-11-06T13:56:27.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.956 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.956 Verification LBA range: start 0x0 length 0x400 00:26:46.956 Nvme0n1 : 1.03 2120.89 132.56 0.00 0.00 29705.34 4605.94 26951.35 00:26:46.956 [2024-11-06T13:56:27.889Z] =================================================================================================================== 00:26:46.956 [2024-11-06T13:56:27.889Z] Total : 2120.89 132.56 0.00 0.00 29705.34 4605.94 26951.35 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:47.214 13:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.214 rmmod nvme_tcp 00:26:47.214 rmmod nvme_fabrics 00:26:47.214 rmmod nvme_keyring 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.214 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 102171 ']' 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 102171 00:26:47.215 13:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 102171 ']' 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 102171 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102171 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:47.215 killing process with pid 102171 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102171' 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 102171 00:26:47.215 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 102171 00:26:47.474 [2024-11-06 13:56:28.283999] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:47.474 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:47.474 13:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:47.734 00:26:47.734 real 0m6.605s 00:26:47.734 user 0m19.001s 00:26:47.734 sys 0m2.788s 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:47.734 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:47.734 ************************************ 00:26:47.734 END TEST nvmf_host_management 00:26:47.734 ************************************ 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 ************************************ 00:26:47.994 START TEST nvmf_lvol 00:26:47.994 ************************************ 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:47.994 * Looking for test storage... 
00:26:47.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:26:47.994 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:48.255 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:48.255 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.255 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.255 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.255 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.256 --rc genhtml_branch_coverage=1 00:26:48.256 --rc genhtml_function_coverage=1 00:26:48.256 --rc genhtml_legend=1 00:26:48.256 --rc geninfo_all_blocks=1 00:26:48.256 --rc geninfo_unexecuted_blocks=1 00:26:48.256 00:26:48.256 ' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.256 --rc genhtml_branch_coverage=1 00:26:48.256 --rc genhtml_function_coverage=1 00:26:48.256 --rc genhtml_legend=1 00:26:48.256 --rc geninfo_all_blocks=1 00:26:48.256 --rc geninfo_unexecuted_blocks=1 00:26:48.256 00:26:48.256 ' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.256 --rc genhtml_branch_coverage=1 00:26:48.256 --rc genhtml_function_coverage=1 00:26:48.256 --rc genhtml_legend=1 00:26:48.256 --rc geninfo_all_blocks=1 00:26:48.256 --rc geninfo_unexecuted_blocks=1 00:26:48.256 00:26:48.256 ' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:48.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.256 --rc genhtml_branch_coverage=1 00:26:48.256 --rc genhtml_function_coverage=1 00:26:48.256 --rc genhtml_legend=1 00:26:48.256 --rc geninfo_all_blocks=1 00:26:48.256 --rc geninfo_unexecuted_blocks=1 00:26:48.256 00:26:48.256 ' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.256 13:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.256 13:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:48.256 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:48.257 Cannot find device "nvmf_init_br" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:48.257 Cannot find device "nvmf_init_br2" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:48.257 Cannot find device "nvmf_tgt_br" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:48.257 Cannot find device "nvmf_tgt_br2" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:48.257 Cannot find device "nvmf_init_br" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:48.257 Cannot find device "nvmf_init_br2" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:48.257 Cannot find 
device "nvmf_tgt_br" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:48.257 Cannot find device "nvmf_tgt_br2" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:48.257 Cannot find device "nvmf_br" 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:26:48.257 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:48.516 Cannot find device "nvmf_init_if" 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:48.516 Cannot find device "nvmf_init_if2" 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:48.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:48.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:48.516 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:48.517 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:48.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:48.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:26:48.776 00:26:48.776 --- 10.0.0.3 ping statistics --- 00:26:48.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.776 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:48.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:48.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:26:48.776 00:26:48.776 --- 10.0.0.4 ping statistics --- 00:26:48.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.776 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:48.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:26:48.776 00:26:48.776 --- 10.0.0.1 ping statistics --- 00:26:48.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.776 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:48.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:26:48.776 00:26:48.776 --- 10.0.0.2 ping statistics --- 00:26:48.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.776 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:48.776 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=102565 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 102565 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 102565 ']' 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.777 13:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:48.777 [2024-11-06 13:56:29.586441] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:48.777 [2024-11-06 13:56:29.587383] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:26:48.777 [2024-11-06 13:56:29.587432] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.036 [2024-11-06 13:56:29.738132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:49.036 [2024-11-06 13:56:29.795227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.036 [2024-11-06 13:56:29.795280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.036 [2024-11-06 13:56:29.795306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.036 [2024-11-06 13:56:29.795315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.036 [2024-11-06 13:56:29.795322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.036 [2024-11-06 13:56:29.796656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.036 [2024-11-06 13:56:29.796836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.036 [2024-11-06 13:56:29.796838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.036 [2024-11-06 13:56:29.921611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:49.036 [2024-11-06 13:56:29.922475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:49.036 [2024-11-06 13:56:29.922498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:49.036 [2024-11-06 13:56:29.922752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
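For reference, the network bring-up traced above (nvmf_veth_init, nvmf/common.sh lines 145-219) reduces to the condensed sketch below. Every command is taken from the trace itself; the 'ip link set ... up' calls, the remaining iptables rules, and the four ping checks are elided, so this is a summary rather than the verbatim function body:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator side, on the host
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                 # bridge the host and namespace halves together
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT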
00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.605 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:49.865 [2024-11-06 13:56:30.721965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.865 13:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:50.124 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:50.124 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:50.384 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:50.384 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:50.643 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:50.902 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5982b5b2-7ec8-4bf5-acd6-f61dcd36eaac 00:26:50.902 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5982b5b2-7ec8-4bf5-acd6-f61dcd36eaac lvol 20 00:26:51.161 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=379a9305-8fad-42aa-beb2-4e31fb98dec9 00:26:51.161 13:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:51.420 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 379a9305-8fad-42aa-beb2-4e31fb98dec9 00:26:51.679 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:51.938 [2024-11-06 13:56:32.649809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:51.938 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:52.205 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102702 00:26:52.205 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:26:52.205 13:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:26:53.143 13:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 379a9305-8fad-42aa-beb2-4e31fb98dec9 MY_SNAPSHOT 00:26:53.402 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=94ea36ba-490a-420d-ad5a-faed2fa7829a 00:26:53.402 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 379a9305-8fad-42aa-beb2-4e31fb98dec9 30 00:26:53.661 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 94ea36ba-490a-420d-ad5a-faed2fa7829a MY_CLONE 00:26:53.920 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=78855337-efd3-4a80-8df7-5a0f417bebdd 00:26:53.920 13:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 78855337-efd3-4a80-8df7-5a0f417bebdd 00:26:54.488 13:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102702 00:27:02.609 Initializing NVMe Controllers 00:27:02.609 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:27:02.609 Controller IO queue size 128, less than required. 00:27:02.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:02.609 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:02.609 Initialization complete. Launching workers. 
00:27:02.609 ======================================================== 00:27:02.609 Latency(us) 00:27:02.609 Device Information : IOPS MiB/s Average min max 00:27:02.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12015.10 46.93 10655.52 3273.85 68005.46 00:27:02.609 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12314.60 48.10 10396.39 805.42 122122.14 00:27:02.609 ======================================================== 00:27:02.609 Total : 24329.70 95.04 10524.36 805.42 122122.14 00:27:02.609 00:27:02.609 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:02.609 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 379a9305-8fad-42aa-beb2-4e31fb98dec9 00:27:02.868 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5982b5b2-7ec8-4bf5-acd6-f61dcd36eaac 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:03.127 13:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:03.127 rmmod nvme_tcp 00:27:03.127 rmmod nvme_fabrics 00:27:03.127 rmmod nvme_keyring 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 102565 ']' 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 102565 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 102565 ']' 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 102565 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:03.127 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102565 00:27:03.387 killing 
process with pid 102565 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102565' 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 102565 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 102565 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:03.387 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:03.648 
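Stripped of the xtrace noise, the nvmf_lvol test torn down here boils down to the following RPC sequence (condensed from the trace above; the $rpc shorthand and the shell variables standing in for the UUIDs printed in the trace are illustrative, not part of the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # size-20 lvol, UUID returned
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# ...while spdk_nvme_perf drives randwrite I/O at 10.0.0.3:4420:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown: nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore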
13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.648 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:27:03.908 00:27:03.908 real 0m15.881s 00:27:03.908 user 0m52.362s 00:27:03.908 sys 0m8.458s 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:03.908 ************************************ 00:27:03.908 END TEST nvmf_lvol 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:03.908 ************************************ 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:03.908 ************************************ 00:27:03.908 START TEST nvmf_lvs_grow 00:27:03.908 ************************************ 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:03.908 * Looking for test storage... 
00:27:03.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:27:03.908 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:04.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.167 --rc genhtml_branch_coverage=1 00:27:04.167 --rc genhtml_function_coverage=1 00:27:04.167 --rc genhtml_legend=1 00:27:04.167 --rc geninfo_all_blocks=1 00:27:04.167 --rc geninfo_unexecuted_blocks=1 00:27:04.167 00:27:04.167 ' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:04.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.167 --rc genhtml_branch_coverage=1 00:27:04.167 --rc genhtml_function_coverage=1 00:27:04.167 --rc genhtml_legend=1 00:27:04.167 --rc geninfo_all_blocks=1 00:27:04.167 --rc geninfo_unexecuted_blocks=1 00:27:04.167 00:27:04.167 ' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:04.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.167 --rc genhtml_branch_coverage=1 00:27:04.167 --rc genhtml_function_coverage=1 00:27:04.167 --rc genhtml_legend=1 00:27:04.167 --rc geninfo_all_blocks=1 00:27:04.167 --rc geninfo_unexecuted_blocks=1 00:27:04.167 00:27:04.167 ' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:04.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.167 --rc genhtml_branch_coverage=1 00:27:04.167 --rc genhtml_function_coverage=1 00:27:04.167 --rc genhtml_legend=1 00:27:04.167 --rc geninfo_all_blocks=1 00:27:04.167 --rc geninfo_unexecuted_blocks=1 00:27:04.167 00:27:04.167 ' 00:27:04.167 13:56:44 
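The lt/cmp_versions trace just above ('lt 1.15 2', deciding which lcov option set applies) compares dotted version strings field by field. A condensed reconstruction, assuming purely numeric fields (the real scripts/common.sh also normalizes each field through its decimal() helper):

lt() {  # succeeds (returns 0) when version $1 < version $2
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-', ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}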
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.167 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.168 13:56:44 
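The build_nvmf_app_args fragments traced just above assemble the target command line one array append at a time. A rough reconstruction under the settings visible in this run (SHM id 0, NO_HUGE empty, interrupt mode enabled); $SPDK_BIN_DIR is an illustrative stand-in for the binary path printed in the trace, and this is not the verbatim common.sh code:

NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id 0 plus tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                    # empty in this run, so hugepages stay enabled
NVMF_APP+=(--interrupt-mode)                   # the '[' 1 -eq 1 ']' branch taken above
# nvmf_veth_init later prepends the namespace wrapper (common.sh@227):
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# net effect, as launched earlier at common.sh@508:
#   ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7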
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:04.168 Cannot find device "nvmf_init_br" 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:27:04.168 13:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:04.168 Cannot find device "nvmf_init_br2" 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:04.168 Cannot find device "nvmf_tgt_br" 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:04.168 Cannot find device "nvmf_tgt_br2" 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:04.168 Cannot find device "nvmf_init_br" 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:04.168 Cannot find device "nvmf_init_br2" 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:27:04.168 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:04.427 Cannot find device "nvmf_tgt_br" 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:04.427 Cannot find device "nvmf_tgt_br2" 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:04.427 Cannot find device "nvmf_br" 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:04.427 Cannot find device "nvmf_init_if" 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:04.427 Cannot find device "nvmf_init_if2" 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:04.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:04.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:04.427 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:27:04.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:04.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:27:04.686 00:27:04.686 --- 10.0.0.3 ping statistics --- 00:27:04.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.686 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:04.686 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:04.686 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.130 ms 00:27:04.686 00:27:04.686 --- 10.0.0.4 ping statistics --- 00:27:04.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.686 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:04.686 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:04.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:27:04.686 00:27:04.686 --- 10.0.0.1 ping statistics --- 00:27:04.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.686 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:04.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:27:04.687 00:27:04.687 --- 10.0.0.2 ping statistics --- 00:27:04.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.687 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=103127 00:27:04.687 13:56:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 103127 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 103127 ']' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:04.687 13:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:04.687 [2024-11-06 13:56:45.592984] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:04.687 [2024-11-06 13:56:45.594016] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:04.687 [2024-11-06 13:56:45.594071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.945 [2024-11-06 13:56:45.748317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.945 [2024-11-06 13:56:45.786720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.945 [2024-11-06 13:56:45.786767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.945 [2024-11-06 13:56:45.786777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.945 [2024-11-06 13:56:45.786786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.945 [2024-11-06 13:56:45.786793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.945 [2024-11-06 13:56:45.787076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.945 [2024-11-06 13:56:45.858666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:04.945 [2024-11-06 13:56:45.858957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
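The block above is the standard nvmf/common.sh network fixture: a dedicated namespace for the target, two veth pairs per side bridged together on the host, /24 addressing, and iptables ACCEPT rules for the NVMe/TCP port, before nvmf_tgt is launched inside the namespace. A condensed sketch of the same topology, reconstructed from the trace (the ipts/waitforlisten wrappers, the cleanup pass, and error handling are omitted):

    # Namespace for the target side
    ip netns add nvmf_tgt_ns_spdk
    # Host-side initiator interfaces and their bridge peers
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # Target-side interfaces, moved into the namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: .1/.2 for the initiator, .3/.4 for the target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # One bridge ties the four peer ends together
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # Open the NVMe/TCP port and allow intra-bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Target runs on one core, in interrupt mode, inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

The four pings that follow in the trace (host to 10.0.0.3/.4, namespace back to 10.0.0.1/.2) are the sanity check that the bridge path works in both directions before any NVMe traffic is attempted.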
00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:05.882 [2024-11-06 13:56:46.723991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.882 ************************************ 00:27:05.882 START TEST lvs_grow_clean 00:27:05.882 ************************************ 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:05.882 13:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:06.141 13:56:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:06.141 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:06.401 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:06.401 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:06.401 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:06.660 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:06.660 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:06.660 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1711ff39-4673-4935-a161-9b0d85b2dfce lvol 150 00:27:06.918 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdcc0661-418d-4dae-84bc-8152cc93fa47 00:27:06.918 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:06.918 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:07.178 [2024-11-06 13:56:47.903675] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:07.178 [2024-11-06 13:56:47.903848] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:07.178 true 00:27:07.178 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:07.178 13:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:07.436 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:07.436 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:07.695 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdcc0661-418d-4dae-84bc-8152cc93fa47 00:27:07.695 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:07.955 [2024-11-06 13:56:48.780311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:07.955 13:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103282 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103282 /var/tmp/bdevperf.sock 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 103282 ']' 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:08.215 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:08.215 [2024-11-06 13:56:49.078506] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
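At this point the lvol is exported and a bdevperf instance is parked with -z, waiting for configuration over its own RPC socket. Condensed from the trace, the export-and-attach sequence is, in order ($lvol stands for the UUID printed by bdev_lvol_create, and paths are abbreviated relative to the repo root):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # bdevperf idles on -z until perform_tests is sent to /var/tmp/bdevperf.sock
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach and perform_tests steps appear a little further down in the trace; the bdev_get_bdevs dump that follows them is how the test confirms the remote namespace surfaced as Nvme0n1 with the expected 38912 blocks.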
00:27:08.215 [2024-11-06 13:56:49.079154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103282 ] 00:27:08.472 [2024-11-06 13:56:49.232302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.472 [2024-11-06 13:56:49.272082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.039 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:09.039 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:27:09.039 13:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:09.297 Nvme0n1 00:27:09.665 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:09.665 [ 00:27:09.665 { 00:27:09.665 "aliases": [ 00:27:09.665 "bdcc0661-418d-4dae-84bc-8152cc93fa47" 00:27:09.665 ], 00:27:09.665 "assigned_rate_limits": { 00:27:09.665 "r_mbytes_per_sec": 0, 00:27:09.665 "rw_ios_per_sec": 0, 00:27:09.665 "rw_mbytes_per_sec": 0, 00:27:09.665 "w_mbytes_per_sec": 0 00:27:09.665 }, 00:27:09.665 "block_size": 4096, 00:27:09.665 "claimed": false, 00:27:09.665 "driver_specific": { 00:27:09.665 "mp_policy": "active_passive", 00:27:09.665 "nvme": [ 00:27:09.665 { 00:27:09.665 "ctrlr_data": { 00:27:09.665 "ana_reporting": false, 00:27:09.665 "cntlid": 1, 00:27:09.665 "firmware_revision": "25.01", 00:27:09.665 "model_number": "SPDK bdev Controller", 00:27:09.665 "multi_ctrlr": true, 00:27:09.665 "oacs": { 00:27:09.665 "firmware": 0, 00:27:09.665 "format": 0, 00:27:09.665 "ns_manage": 0, 00:27:09.665 "security": 0 00:27:09.665 }, 00:27:09.665 "serial_number": "SPDK0", 00:27:09.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.665 "vendor_id": "0x8086" 00:27:09.665 }, 00:27:09.665 "ns_data": { 00:27:09.665 "can_share": true, 00:27:09.665 "id": 1 00:27:09.665 }, 00:27:09.665 "trid": { 00:27:09.665 "adrfam": "IPv4", 00:27:09.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.665 "traddr": "10.0.0.3", 00:27:09.665 "trsvcid": "4420", 00:27:09.665 "trtype": "TCP" 00:27:09.665 }, 00:27:09.665 "vs": { 00:27:09.665 "nvme_version": "1.3" 00:27:09.665 } 00:27:09.665 } 00:27:09.665 ] 00:27:09.665 }, 00:27:09.665 "memory_domains": [ 00:27:09.665 { 00:27:09.665 "dma_device_id": "system", 00:27:09.665 "dma_device_type": 1 00:27:09.665 } 00:27:09.665 ], 00:27:09.665 "name": "Nvme0n1", 00:27:09.665 "num_blocks": 38912, 00:27:09.665 "numa_id": -1, 00:27:09.665 "product_name": "NVMe disk", 00:27:09.665 "supported_io_types": { 00:27:09.665 "abort": true, 00:27:09.665 "compare": true, 00:27:09.665 "compare_and_write": true, 00:27:09.665 "copy": true, 00:27:09.665 "flush": true, 00:27:09.665 "get_zone_info": false, 00:27:09.665 "nvme_admin": true, 00:27:09.665 "nvme_io": true, 00:27:09.665 "nvme_io_md": false, 00:27:09.665 "nvme_iov_md": false, 00:27:09.665 "read": true, 00:27:09.665 "reset": true, 00:27:09.665 "seek_data": false, 00:27:09.665 
"seek_hole": false, 00:27:09.665 "unmap": true, 00:27:09.665 "write": true, 00:27:09.665 "write_zeroes": true, 00:27:09.665 "zcopy": false, 00:27:09.665 "zone_append": false, 00:27:09.665 "zone_management": false 00:27:09.665 }, 00:27:09.665 "uuid": "bdcc0661-418d-4dae-84bc-8152cc93fa47", 00:27:09.665 "zoned": false 00:27:09.665 } 00:27:09.665 ] 00:27:09.665 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103324 00:27:09.665 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:09.665 13:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:09.665 Running I/O for 10 seconds... 00:27:11.045 Latency(us) 00:27:11.045 [2024-11-06T13:56:51.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.045 Nvme0n1 : 1.00 10091.00 39.42 0.00 0.00 0.00 0.00 0.00 00:27:11.045 [2024-11-06T13:56:51.978Z] =================================================================================================================== 00:27:11.045 [2024-11-06T13:56:51.978Z] Total : 10091.00 39.42 0.00 0.00 0.00 0.00 0.00 00:27:11.045 00:27:11.615 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:11.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.875 Nvme0n1 : 2.00 10509.00 41.05 0.00 0.00 0.00 0.00 0.00 00:27:11.875 [2024-11-06T13:56:52.808Z] =================================================================================================================== 00:27:11.875 [2024-11-06T13:56:52.808Z] Total : 10509.00 41.05 0.00 0.00 0.00 0.00 0.00 00:27:11.875 00:27:11.875 true 00:27:11.875 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:11.875 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:12.135 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:12.135 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:12.135 13:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 103324 00:27:12.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.704 Nvme0n1 : 3.00 10594.33 41.38 0.00 0.00 0.00 0.00 0.00 00:27:12.704 [2024-11-06T13:56:53.637Z] =================================================================================================================== 00:27:12.704 [2024-11-06T13:56:53.637Z] Total : 10594.33 41.38 0.00 0.00 0.00 0.00 0.00 00:27:12.704 00:27:13.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:13.642 Nvme0n1 : 4.00 10603.25 41.42 0.00 0.00 0.00 0.00 0.00 00:27:13.642 
[2024-11-06T13:56:54.575Z] =================================================================================================================== 00:27:13.642 [2024-11-06T13:56:54.575Z] Total : 10603.25 41.42 0.00 0.00 0.00 0.00 0.00 00:27:13.642 00:27:15.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.025 Nvme0n1 : 5.00 10594.60 41.39 0.00 0.00 0.00 0.00 0.00 00:27:15.025 [2024-11-06T13:56:55.958Z] =================================================================================================================== 00:27:15.025 [2024-11-06T13:56:55.958Z] Total : 10594.60 41.39 0.00 0.00 0.00 0.00 0.00 00:27:15.025 00:27:15.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.963 Nvme0n1 : 6.00 10573.67 41.30 0.00 0.00 0.00 0.00 0.00 00:27:15.963 [2024-11-06T13:56:56.896Z] =================================================================================================================== 00:27:15.963 [2024-11-06T13:56:56.896Z] Total : 10573.67 41.30 0.00 0.00 0.00 0.00 0.00 00:27:15.963 00:27:16.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:16.900 Nvme0n1 : 7.00 10458.43 40.85 0.00 0.00 0.00 0.00 0.00 00:27:16.900 [2024-11-06T13:56:57.833Z] =================================================================================================================== 00:27:16.900 [2024-11-06T13:56:57.833Z] Total : 10458.43 40.85 0.00 0.00 0.00 0.00 0.00 00:27:16.900 00:27:17.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:17.839 Nvme0n1 : 8.00 10404.00 40.64 0.00 0.00 0.00 0.00 0.00 00:27:17.839 [2024-11-06T13:56:58.772Z] =================================================================================================================== 00:27:17.839 [2024-11-06T13:56:58.772Z] Total : 10404.00 40.64 0.00 0.00 0.00 0.00 0.00 00:27:17.839 00:27:18.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.776 Nvme0n1 : 9.00 10350.22 40.43 0.00 0.00 0.00 0.00 0.00 00:27:18.776 [2024-11-06T13:56:59.709Z] =================================================================================================================== 00:27:18.776 [2024-11-06T13:56:59.709Z] Total : 10350.22 40.43 0.00 0.00 0.00 0.00 0.00 00:27:18.776 00:27:19.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.711 Nvme0n1 : 10.00 10317.70 40.30 0.00 0.00 0.00 0.00 0.00 00:27:19.711 [2024-11-06T13:57:00.644Z] =================================================================================================================== 00:27:19.711 [2024-11-06T13:57:00.644Z] Total : 10317.70 40.30 0.00 0.00 0.00 0.00 0.00 00:27:19.711 00:27:19.711 00:27:19.711 Latency(us) 00:27:19.711 [2024-11-06T13:57:00.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.711 Nvme0n1 : 10.01 10325.18 40.33 0.00 0.00 12393.72 5553.45 38742.57 00:27:19.711 [2024-11-06T13:57:00.644Z] =================================================================================================================== 00:27:19.711 [2024-11-06T13:57:00.644Z] Total : 10325.18 40.33 0.00 0.00 12393.72 5553.45 38742.57 00:27:19.711 { 00:27:19.711 "results": [ 00:27:19.711 { 00:27:19.711 "job": "Nvme0n1", 00:27:19.711 "core_mask": "0x2", 00:27:19.711 "workload": "randwrite", 00:27:19.711 "status": "finished", 00:27:19.711 "queue_depth": 128, 00:27:19.711 
"io_size": 4096, 00:27:19.711 "runtime": 10.005154, 00:27:19.711 "iops": 10325.178403051068, 00:27:19.711 "mibps": 40.332728136918234, 00:27:19.712 "io_failed": 0, 00:27:19.712 "io_timeout": 0, 00:27:19.712 "avg_latency_us": 12393.717216936087, 00:27:19.712 "min_latency_us": 5553.4522088353415, 00:27:19.712 "max_latency_us": 38742.56706827309 00:27:19.712 } 00:27:19.712 ], 00:27:19.712 "core_count": 1 00:27:19.712 } 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103282 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 103282 ']' 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 103282 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103282 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103282' 00:27:19.712 killing process with pid 103282 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 103282 00:27:19.712 Received shutdown signal, test time was about 10.000000 seconds 00:27:19.712 00:27:19.712 Latency(us) 00:27:19.712 [2024-11-06T13:57:00.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.712 [2024-11-06T13:57:00.645Z] =================================================================================================================== 00:27:19.712 [2024-11-06T13:57:00.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.712 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 103282 00:27:19.972 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:20.231 13:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:20.490 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:20.490 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:20.750 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:27:20.750 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:20.750 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:20.750 [2024-11-06 13:57:01.651756] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:21.009 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:21.009 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:21.010 2024/11/06 13:57:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1711ff39-4673-4935-a161-9b0d85b2dfce], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:27:21.010 request: 00:27:21.010 { 00:27:21.010 "method": "bdev_lvol_get_lvstores", 00:27:21.010 "params": { 00:27:21.010 "uuid": "1711ff39-4673-4935-a161-9b0d85b2dfce" 00:27:21.010 } 00:27:21.010 } 00:27:21.010 Got JSON-RPC error response 00:27:21.010 GoRPCClient: error on JSON-RPC call 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.010 13:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:21.269 aio_bdev 00:27:21.269 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bdcc0661-418d-4dae-84bc-8152cc93fa47 00:27:21.269 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=bdcc0661-418d-4dae-84bc-8152cc93fa47 00:27:21.270 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:21.270 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:27:21.270 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:21.270 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:21.270 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:21.529 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bdcc0661-418d-4dae-84bc-8152cc93fa47 -t 2000 00:27:21.788 [ 00:27:21.788 { 00:27:21.788 "aliases": [ 00:27:21.788 "lvs/lvol" 00:27:21.788 ], 00:27:21.788 "assigned_rate_limits": { 00:27:21.788 "r_mbytes_per_sec": 0, 00:27:21.788 "rw_ios_per_sec": 0, 00:27:21.788 "rw_mbytes_per_sec": 0, 00:27:21.788 "w_mbytes_per_sec": 0 00:27:21.788 }, 00:27:21.788 "block_size": 4096, 00:27:21.788 "claimed": false, 00:27:21.788 "driver_specific": { 00:27:21.788 "lvol": { 00:27:21.788 "base_bdev": "aio_bdev", 00:27:21.788 "clone": false, 00:27:21.788 "esnap_clone": false, 00:27:21.788 "lvol_store_uuid": "1711ff39-4673-4935-a161-9b0d85b2dfce", 00:27:21.788 "num_allocated_clusters": 38, 00:27:21.788 "snapshot": false, 00:27:21.788 "thin_provision": false 00:27:21.788 } 00:27:21.788 }, 00:27:21.788 "name": "bdcc0661-418d-4dae-84bc-8152cc93fa47", 00:27:21.788 "num_blocks": 38912, 00:27:21.788 "product_name": "Logical Volume", 00:27:21.788 "supported_io_types": { 00:27:21.788 "abort": false, 00:27:21.788 "compare": false, 00:27:21.788 "compare_and_write": false, 00:27:21.788 "copy": false, 00:27:21.788 "flush": false, 00:27:21.788 "get_zone_info": false, 00:27:21.788 "nvme_admin": false, 00:27:21.788 "nvme_io": false, 00:27:21.789 "nvme_io_md": false, 00:27:21.789 "nvme_iov_md": false, 00:27:21.789 "read": true, 00:27:21.789 "reset": true, 00:27:21.789 "seek_data": true, 00:27:21.789 "seek_hole": true, 00:27:21.789 "unmap": true, 00:27:21.789 "write": true, 00:27:21.789 "write_zeroes": true, 00:27:21.789 "zcopy": false, 00:27:21.789 "zone_append": false, 00:27:21.789 "zone_management": false 00:27:21.789 }, 00:27:21.789 "uuid": 
"bdcc0661-418d-4dae-84bc-8152cc93fa47", 00:27:21.789 "zoned": false 00:27:21.789 } 00:27:21.789 ] 00:27:21.789 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:27:21.789 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:21.789 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:22.048 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:22.048 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:22.048 13:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:22.307 13:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:22.307 13:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bdcc0661-418d-4dae-84bc-8152cc93fa47 00:27:22.566 13:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1711ff39-4673-4935-a161-9b0d85b2dfce 00:27:22.826 13:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:22.826 13:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:23.395 00:27:23.395 real 0m17.445s 00:27:23.395 user 0m15.629s 00:27:23.395 sys 0m3.089s 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:23.395 ************************************ 00:27:23.395 END TEST lvs_grow_clean 00:27:23.395 ************************************ 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:23.395 ************************************ 00:27:23.395 START TEST lvs_grow_dirty 00:27:23.395 ************************************ 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:27:23.395 13:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:23.395 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:23.654 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:23.654 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:23.914 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:23.914 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:23.914 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:24.174 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:24.174 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:24.174 13:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 lvol 150 00:27:24.434 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:24.434 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:24.434 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:24.694 [2024-11-06 13:57:05.411647] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:24.694 [2024-11-06 13:57:05.411823] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:24.694 true 00:27:24.694 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:24.694 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:24.953 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:24.953 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:24.953 13:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:25.212 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:25.471 [2024-11-06 13:57:06.276183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:25.471 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103706 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103706 /var/tmp/bdevperf.sock 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 103706 ']' 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
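The assertion at the heart of both the clean and dirty variants: growing the backing file and rescanning the AIO bdev does not, by itself, give the lvstore more clusters; bdev_lvol_grow_lvstore has to claim them. A minimal standalone sketch of that check (the file path here is hypothetical, the test itself uses test/nvmf/target/aio_bdev; 200M and 400M at the 4 MiB cluster size account for the 49 and 99 total_data_clusters seen in the jq output, with roughly one cluster going to metadata):

    truncate -s 200M /tmp/aio_file
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    truncate -s 400M /tmp/aio_file
    rpc.py bdev_aio_rescan aio_bdev   # block count doubles, lvstore still reports 49
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The dirty variant exercises the same grow while bdevperf is actively writing to the exported lvol, which is why the grow RPC in this section lands in the middle of the ten-second randwrite run rather than before it.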
00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.731 13:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 [2024-11-06 13:57:06.589522] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:25.731 [2024-11-06 13:57:06.589602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103706 ] 00:27:25.990 [2024-11-06 13:57:06.742750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.990 [2024-11-06 13:57:06.807873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.927 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.927 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:26.927 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:26.927 Nvme0n1 00:27:26.927 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:27.186 [ 00:27:27.186 { 00:27:27.186 "aliases": [ 00:27:27.186 "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e" 00:27:27.186 ], 00:27:27.186 "assigned_rate_limits": { 00:27:27.186 "r_mbytes_per_sec": 0, 00:27:27.186 "rw_ios_per_sec": 0, 00:27:27.186 "rw_mbytes_per_sec": 0, 00:27:27.186 "w_mbytes_per_sec": 0 00:27:27.186 }, 00:27:27.186 "block_size": 4096, 00:27:27.186 "claimed": false, 00:27:27.186 "driver_specific": { 00:27:27.186 "mp_policy": "active_passive", 00:27:27.186 "nvme": [ 00:27:27.186 { 00:27:27.186 "ctrlr_data": { 00:27:27.186 "ana_reporting": false, 00:27:27.186 "cntlid": 1, 00:27:27.186 "firmware_revision": "25.01", 00:27:27.186 "model_number": "SPDK bdev Controller", 00:27:27.186 "multi_ctrlr": true, 00:27:27.186 "oacs": { 00:27:27.186 "firmware": 0, 00:27:27.186 "format": 0, 00:27:27.186 "ns_manage": 0, 00:27:27.186 "security": 0 00:27:27.186 }, 00:27:27.186 "serial_number": "SPDK0", 00:27:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.186 "vendor_id": "0x8086" 00:27:27.187 }, 00:27:27.187 "ns_data": { 00:27:27.187 "can_share": true, 00:27:27.187 "id": 1 00:27:27.187 }, 00:27:27.187 "trid": { 00:27:27.187 "adrfam": "IPv4", 00:27:27.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.187 "traddr": "10.0.0.3", 00:27:27.187 "trsvcid": "4420", 00:27:27.187 "trtype": "TCP" 00:27:27.187 }, 00:27:27.187 "vs": { 00:27:27.187 "nvme_version": "1.3" 00:27:27.187 } 00:27:27.187 } 00:27:27.187 ] 00:27:27.187 }, 00:27:27.187 "memory_domains": [ 00:27:27.187 { 00:27:27.187 "dma_device_id": "system", 00:27:27.187 "dma_device_type": 1 
00:27:27.187 } 00:27:27.187 ], 00:27:27.187 "name": "Nvme0n1", 00:27:27.187 "num_blocks": 38912, 00:27:27.187 "numa_id": -1, 00:27:27.187 "product_name": "NVMe disk", 00:27:27.187 "supported_io_types": { 00:27:27.187 "abort": true, 00:27:27.187 "compare": true, 00:27:27.187 "compare_and_write": true, 00:27:27.187 "copy": true, 00:27:27.187 "flush": true, 00:27:27.187 "get_zone_info": false, 00:27:27.187 "nvme_admin": true, 00:27:27.187 "nvme_io": true, 00:27:27.187 "nvme_io_md": false, 00:27:27.187 "nvme_iov_md": false, 00:27:27.187 "read": true, 00:27:27.187 "reset": true, 00:27:27.187 "seek_data": false, 00:27:27.187 "seek_hole": false, 00:27:27.187 "unmap": true, 00:27:27.187 "write": true, 00:27:27.187 "write_zeroes": true, 00:27:27.187 "zcopy": false, 00:27:27.187 "zone_append": false, 00:27:27.187 "zone_management": false 00:27:27.187 }, 00:27:27.187 "uuid": "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e", 00:27:27.187 "zoned": false 00:27:27.187 } 00:27:27.187 ] 00:27:27.187 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:27.187 13:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103753 00:27:27.187 13:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:27.187 Running I/O for 10 seconds... 00:27:28.589 Latency(us) 00:27:28.589 [2024-11-06T13:57:09.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:28.589 Nvme0n1 : 1.00 11890.00 46.45 0.00 0.00 0.00 0.00 0.00 00:27:28.589 [2024-11-06T13:57:09.523Z] =================================================================================================================== 00:27:28.590 [2024-11-06T13:57:09.523Z] Total : 11890.00 46.45 0.00 0.00 0.00 0.00 0.00 00:27:28.590 00:27:29.159 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:29.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:29.159 Nvme0n1 : 2.00 11594.50 45.29 0.00 0.00 0.00 0.00 0.00 00:27:29.159 [2024-11-06T13:57:10.092Z] =================================================================================================================== 00:27:29.159 [2024-11-06T13:57:10.092Z] Total : 11594.50 45.29 0.00 0.00 0.00 0.00 0.00 00:27:29.160 00:27:29.419 true 00:27:29.419 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:29.419 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:29.678 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:29.678 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:29.678 13:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 103753 00:27:30.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:30.246 Nvme0n1 : 3.00 11380.00 44.45 0.00 0.00 0.00 0.00 0.00 00:27:30.246 [2024-11-06T13:57:11.179Z] =================================================================================================================== 00:27:30.246 [2024-11-06T13:57:11.179Z] Total : 11380.00 44.45 0.00 0.00 0.00 0.00 0.00 00:27:30.246 00:27:31.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:31.184 Nvme0n1 : 4.00 11248.50 43.94 0.00 0.00 0.00 0.00 0.00 00:27:31.184 [2024-11-06T13:57:12.117Z] =================================================================================================================== 00:27:31.184 [2024-11-06T13:57:12.117Z] Total : 11248.50 43.94 0.00 0.00 0.00 0.00 0.00 00:27:31.184 00:27:32.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:32.563 Nvme0n1 : 5.00 11007.20 43.00 0.00 0.00 0.00 0.00 0.00 00:27:32.563 [2024-11-06T13:57:13.496Z] =================================================================================================================== 00:27:32.563 [2024-11-06T13:57:13.496Z] Total : 11007.20 43.00 0.00 0.00 0.00 0.00 0.00 00:27:32.563 00:27:33.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:33.500 Nvme0n1 : 6.00 10925.83 42.68 0.00 0.00 0.00 0.00 0.00 00:27:33.500 [2024-11-06T13:57:14.433Z] =================================================================================================================== 00:27:33.500 [2024-11-06T13:57:14.433Z] Total : 10925.83 42.68 0.00 0.00 0.00 0.00 0.00 00:27:33.500 00:27:34.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:34.437 Nvme0n1 : 7.00 10871.71 42.47 0.00 0.00 0.00 0.00 0.00 00:27:34.437 [2024-11-06T13:57:15.370Z] =================================================================================================================== 00:27:34.437 [2024-11-06T13:57:15.370Z] Total : 10871.71 42.47 0.00 0.00 0.00 0.00 0.00 00:27:34.437 00:27:35.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:35.375 Nvme0n1 : 8.00 10544.88 41.19 0.00 0.00 0.00 0.00 0.00 00:27:35.375 [2024-11-06T13:57:16.308Z] =================================================================================================================== 00:27:35.375 [2024-11-06T13:57:16.308Z] Total : 10544.88 41.19 0.00 0.00 0.00 0.00 0.00 00:27:35.375 00:27:36.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.341 Nvme0n1 : 9.00 10471.22 40.90 0.00 0.00 0.00 0.00 0.00 00:27:36.341 [2024-11-06T13:57:17.274Z] =================================================================================================================== 00:27:36.341 [2024-11-06T13:57:17.274Z] Total : 10471.22 40.90 0.00 0.00 0.00 0.00 0.00 00:27:36.341 00:27:37.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:37.303 Nvme0n1 : 10.00 10396.00 40.61 0.00 0.00 0.00 0.00 0.00 00:27:37.303 [2024-11-06T13:57:18.236Z] =================================================================================================================== 00:27:37.303 [2024-11-06T13:57:18.236Z] Total : 10396.00 40.61 0.00 0.00 0.00 0.00 0.00 00:27:37.303 00:27:37.303 00:27:37.303 Latency(us) 00:27:37.303 [2024-11-06T13:57:18.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.303 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:37.303 Nvme0n1 : 10.00 10394.83 40.60 0.00 0.00 12309.49 4842.82 235824.32 00:27:37.303 [2024-11-06T13:57:18.236Z] =================================================================================================================== 00:27:37.303 [2024-11-06T13:57:18.236Z] Total : 10394.83 40.60 0.00 0.00 12309.49 4842.82 235824.32 00:27:37.303 { 00:27:37.303 "results": [ 00:27:37.303 { 00:27:37.303 "job": "Nvme0n1", 00:27:37.303 "core_mask": "0x2", 00:27:37.303 "workload": "randwrite", 00:27:37.303 "status": "finished", 00:27:37.303 "queue_depth": 128, 00:27:37.303 "io_size": 4096, 00:27:37.303 "runtime": 10.002566, 00:27:37.303 "iops": 10394.83268593279, 00:27:37.303 "mibps": 40.60481517942496, 00:27:37.303 "io_failed": 0, 00:27:37.303 "io_timeout": 0, 00:27:37.303 "avg_latency_us": 12309.494520303866, 00:27:37.303 "min_latency_us": 4842.820883534137, 00:27:37.303 "max_latency_us": 235824.32128514056 00:27:37.303 } 00:27:37.303 ], 00:27:37.303 "core_count": 1 00:27:37.303 } 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103706 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 103706 ']' 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 103706 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103706 00:27:37.303 killing process with pid 103706 00:27:37.303 Received shutdown signal, test time was about 10.000000 seconds 00:27:37.303 00:27:37.303 Latency(us) 00:27:37.303 [2024-11-06T13:57:18.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.303 [2024-11-06T13:57:18.236Z] =================================================================================================================== 00:27:37.303 [2024-11-06T13:57:18.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103706' 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 103706 00:27:37.303 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 103706 00:27:37.562 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:37.821 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.080 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:38.080 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:38.080 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:38.081 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:27:38.081 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 103127 00:27:38.081 13:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 103127 00:27:38.340 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 103127 Killed "${NVMF_APP[@]}" "$@" 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=103911 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 103911 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 103911 ']' 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:38.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
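The kill -9 a few entries above is the whole point of the lvs_grow_dirty case: the first target (pid 103127) dies without ever unloading the lvstore, so its superblob stays marked dirty, and the replacement nvmf_tgt started here must recover it on load. A minimal sketch of that kill-and-restart pattern, assuming the stock rpc.py client; the socket poll mirrors what autotest_common.sh's waitforlisten is believed to do, and the pid variable is illustrative:

    kill -9 "$nvmfpid"            # hard kill: the lvstore is never cleanly unloaded
    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # block until the app answers on its RPC socket before issuing further RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done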
00:27:38.340 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:38.341 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:38.341 [2024-11-06 13:57:19.095760] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:38.341 [2024-11-06 13:57:19.096722] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:38.341 [2024-11-06 13:57:19.096790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.341 [2024-11-06 13:57:19.251075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.600 [2024-11-06 13:57:19.320060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.600 [2024-11-06 13:57:19.320117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.600 [2024-11-06 13:57:19.320127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.600 [2024-11-06 13:57:19.320136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.600 [2024-11-06 13:57:19.320143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.600 [2024-11-06 13:57:19.320531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.600 [2024-11-06 13:57:19.447947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:38.600 [2024-11-06 13:57:19.448255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
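With the target back up in interrupt mode, the test re-creates the AIO bdev on the same backing file; lvol examine then finds the dirty lvstore, and the blobstore recovery pass (the "Performing recovery on blobstore" / "Recover: blob 0x…" notices below) rebuilds its metadata before the lvol is reported again. Condensed to just the RPCs involved, with the lvstore/lvol UUIDs replaced by shell variables for readability:

    scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine              # recovery runs during examine
    scripts/rpc.py bdev_get_bdevs -b "$lvol_uuid" -t 2000   # lvol is visible again
    # the recovered lvstore must still show the geometry it had when it went dirty:
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" \
        | jq -r '.[0].free_clusters')
    (( free == 61 ))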
00:27:39.168 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:39.168 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:39.168 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.168 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:39.168 13:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:39.168 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.168 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:39.427 [2024-11-06 13:57:20.260505] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:39.427 [2024-11-06 13:57:20.261147] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:39.427 [2024-11-06 13:57:20.261492] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:39.427 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:39.686 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebed6d6-b234-4a51-9370-ec6bd5c0ce6e -t 2000 00:27:39.946 [ 00:27:39.946 { 00:27:39.946 "aliases": [ 00:27:39.946 "lvs/lvol" 00:27:39.946 ], 00:27:39.946 "assigned_rate_limits": { 00:27:39.946 "r_mbytes_per_sec": 0, 00:27:39.946 "rw_ios_per_sec": 0, 00:27:39.946 "rw_mbytes_per_sec": 0, 00:27:39.946 "w_mbytes_per_sec": 0 00:27:39.946 }, 00:27:39.946 "block_size": 4096, 00:27:39.946 "claimed": false, 00:27:39.946 "driver_specific": { 00:27:39.946 "lvol": { 00:27:39.946 "base_bdev": "aio_bdev", 00:27:39.946 "clone": false, 00:27:39.946 "esnap_clone": false, 00:27:39.946 
"lvol_store_uuid": "c92b4a99-d2ca-47a8-8ec9-d9a56cf42924", 00:27:39.946 "num_allocated_clusters": 38, 00:27:39.946 "snapshot": false, 00:27:39.946 "thin_provision": false 00:27:39.946 } 00:27:39.946 }, 00:27:39.946 "name": "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e", 00:27:39.946 "num_blocks": 38912, 00:27:39.946 "product_name": "Logical Volume", 00:27:39.946 "supported_io_types": { 00:27:39.946 "abort": false, 00:27:39.946 "compare": false, 00:27:39.946 "compare_and_write": false, 00:27:39.946 "copy": false, 00:27:39.946 "flush": false, 00:27:39.946 "get_zone_info": false, 00:27:39.946 "nvme_admin": false, 00:27:39.946 "nvme_io": false, 00:27:39.946 "nvme_io_md": false, 00:27:39.946 "nvme_iov_md": false, 00:27:39.946 "read": true, 00:27:39.946 "reset": true, 00:27:39.946 "seek_data": true, 00:27:39.946 "seek_hole": true, 00:27:39.946 "unmap": true, 00:27:39.946 "write": true, 00:27:39.946 "write_zeroes": true, 00:27:39.946 "zcopy": false, 00:27:39.946 "zone_append": false, 00:27:39.946 "zone_management": false 00:27:39.946 }, 00:27:39.946 "uuid": "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e", 00:27:39.946 "zoned": false 00:27:39.946 } 00:27:39.946 ] 00:27:39.946 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:39.946 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:27:39.946 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:40.206 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:27:40.206 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:40.206 13:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:27:40.466 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:27:40.466 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:40.726 [2024-11-06 13:57:21.417367] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.726 
13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:40.726 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:40.986 2024/11/06 13:57:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c92b4a99-d2ca-47a8-8ec9-d9a56cf42924], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:27:40.986 request: 00:27:40.986 { 00:27:40.986 "method": "bdev_lvol_get_lvstores", 00:27:40.986 "params": { 00:27:40.986 "uuid": "c92b4a99-d2ca-47a8-8ec9-d9a56cf42924" 00:27:40.986 } 00:27:40.986 } 00:27:40.986 Got JSON-RPC error response 00:27:40.986 GoRPCClient: error on JSON-RPC call 00:27:40.986 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:27:40.986 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:40.986 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:40.986 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:40.986 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:40.986 aio_bdev 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:41.246 13:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:41.246 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebed6d6-b234-4a51-9370-ec6bd5c0ce6e -t 2000 00:27:41.506 [ 00:27:41.506 { 00:27:41.506 "aliases": [ 00:27:41.506 "lvs/lvol" 00:27:41.506 ], 00:27:41.506 "assigned_rate_limits": { 00:27:41.506 "r_mbytes_per_sec": 0, 00:27:41.506 "rw_ios_per_sec": 0, 00:27:41.506 "rw_mbytes_per_sec": 0, 00:27:41.506 "w_mbytes_per_sec": 0 00:27:41.506 }, 00:27:41.506 "block_size": 4096, 00:27:41.506 "claimed": false, 00:27:41.506 "driver_specific": { 00:27:41.506 "lvol": { 00:27:41.506 "base_bdev": "aio_bdev", 00:27:41.506 "clone": false, 00:27:41.506 "esnap_clone": false, 00:27:41.506 "lvol_store_uuid": "c92b4a99-d2ca-47a8-8ec9-d9a56cf42924", 00:27:41.506 "num_allocated_clusters": 38, 00:27:41.506 "snapshot": false, 00:27:41.506 "thin_provision": false 00:27:41.506 } 00:27:41.506 }, 00:27:41.506 "name": "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e", 00:27:41.506 "num_blocks": 38912, 00:27:41.506 "product_name": "Logical Volume", 00:27:41.506 "supported_io_types": { 00:27:41.506 "abort": false, 00:27:41.506 "compare": false, 00:27:41.506 "compare_and_write": false, 00:27:41.506 "copy": false, 00:27:41.506 "flush": false, 00:27:41.506 "get_zone_info": false, 00:27:41.506 "nvme_admin": false, 00:27:41.506 "nvme_io": false, 00:27:41.506 "nvme_io_md": false, 00:27:41.506 "nvme_iov_md": false, 00:27:41.506 "read": true, 00:27:41.506 "reset": true, 00:27:41.506 "seek_data": true, 00:27:41.506 "seek_hole": true, 00:27:41.506 "unmap": true, 00:27:41.506 "write": true, 00:27:41.506 "write_zeroes": true, 00:27:41.506 "zcopy": false, 00:27:41.506 "zone_append": false, 00:27:41.506 "zone_management": false 00:27:41.506 }, 00:27:41.506 "uuid": "aebed6d6-b234-4a51-9370-ec6bd5c0ce6e", 00:27:41.506 "zoned": false 00:27:41.506 } 00:27:41.506 ] 00:27:41.506 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:41.506 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:41.506 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:41.765 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:41.765 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:41.765 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:42.025 13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:42.025 
13:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aebed6d6-b234-4a51-9370-ec6bd5c0ce6e 00:27:42.285 13:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c92b4a99-d2ca-47a8-8ec9-d9a56cf42924 00:27:42.544 13:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:42.803 13:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:43.370 00:27:43.370 real 0m19.760s 00:27:43.370 user 0m25.497s 00:27:43.370 sys 0m8.190s 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:43.370 ************************************ 00:27:43.370 END TEST lvs_grow_dirty 00:27:43.370 ************************************ 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:43.370 nvmf_trace.0 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:43.370 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:27:43.963 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:43.963 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:43.964 13:57:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:43.964 rmmod nvme_tcp 00:27:43.964 rmmod nvme_fabrics 00:27:43.964 rmmod nvme_keyring 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 103911 ']' 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 103911 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 103911 ']' 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 103911 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103911 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:43.964 killing process with pid 103911 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103911' 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 103911 00:27:43.964 13:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 103911 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:44.223 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:27:44.483 ************************************ 00:27:44.483 END TEST nvmf_lvs_grow 00:27:44.483 ************************************ 00:27:44.483 00:27:44.483 real 0m40.654s 00:27:44.483 user 0m42.545s 00:27:44.483 sys 0m12.784s 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:44.483 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.743 ************************************ 00:27:44.743 START TEST nvmf_bdev_io_wait 00:27:44.743 ************************************ 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:44.743 * Looking for test storage... 00:27:44.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:44.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.743 --rc genhtml_branch_coverage=1 00:27:44.743 --rc genhtml_function_coverage=1 00:27:44.743 --rc genhtml_legend=1 00:27:44.743 --rc geninfo_all_blocks=1 00:27:44.743 --rc geninfo_unexecuted_blocks=1 00:27:44.743 00:27:44.743 ' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:44.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.743 --rc genhtml_branch_coverage=1 00:27:44.743 --rc genhtml_function_coverage=1 00:27:44.743 --rc genhtml_legend=1 00:27:44.743 --rc geninfo_all_blocks=1 00:27:44.743 --rc geninfo_unexecuted_blocks=1 00:27:44.743 00:27:44.743 ' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:44.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.743 --rc genhtml_branch_coverage=1 00:27:44.743 --rc genhtml_function_coverage=1 00:27:44.743 --rc genhtml_legend=1 00:27:44.743 --rc geninfo_all_blocks=1 00:27:44.743 --rc geninfo_unexecuted_blocks=1 00:27:44.743 00:27:44.743 ' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:44.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.743 --rc genhtml_branch_coverage=1 00:27:44.743 --rc genhtml_function_coverage=1 00:27:44.743 --rc genhtml_legend=1 00:27:44.743 --rc geninfo_all_blocks=1 00:27:44.743 --rc 
geninfo_unexecuted_blocks=1 00:27:44.743 00:27:44.743 ' 00:27:44.743 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.004 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:45.005 Cannot find device "nvmf_init_br" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:45.005 Cannot find device "nvmf_init_br2" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:45.005 Cannot find device "nvmf_tgt_br" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:45.005 Cannot find device "nvmf_tgt_br2" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:45.005 Cannot find device "nvmf_init_br" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:45.005 Cannot find device "nvmf_init_br2" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:27:45.005 Cannot find device "nvmf_tgt_br" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:45.005 Cannot find device "nvmf_tgt_br2" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:45.005 Cannot find device "nvmf_br" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:45.005 Cannot find device "nvmf_init_if" 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:27:45.005 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:45.264 Cannot find device "nvmf_init_if2" 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:45.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:45.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:45.264 13:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:45.264 13:57:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:45.264 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:45.524 
13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:45.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:45.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:27:45.524 00:27:45.524 --- 10.0.0.3 ping statistics --- 00:27:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.524 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:45.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:45.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:27:45.524 00:27:45.524 --- 10.0.0.4 ping statistics --- 00:27:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.524 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:45.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:27:45.524 00:27:45.524 --- 10.0.0.1 ping statistics --- 00:27:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.524 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:45.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:45.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:45.524 00:27:45.524 --- 10.0.0.2 ping statistics --- 00:27:45.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.524 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=104387 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 104387 00:27:45.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 104387 ']' 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
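The preceding block is nvmf_veth_init building the suite's two-leg test topology: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends, the host keeps the initiator ends, and a bridge (nvmf_br) joins the host-side peers so 10.0.0.1/10.0.0.2 can reach 10.0.0.3/10.0.0.4, as the four pings confirm. A minimal sketch of the same pattern with plain iproute2, reduced to the first leg (names and addresses mirror the log; run as root):

# One initiator/target leg of the topology above; the second leg is the same
# with the *2 interface names and 10.0.0.2/10.0.0.4.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address, host side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                           # joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# The ipts wrapper tags every rule it adds with an SPDK_NVMF comment so
# teardown can find and remove exactly these rules later:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                        # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # namespace -> host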
00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.524 13:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:45.524 [2024-11-06 13:57:26.385462] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.524 [2024-11-06 13:57:26.386538] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:45.525 [2024-11-06 13:57:26.386587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.784 [2024-11-06 13:57:26.537972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.784 [2024-11-06 13:57:26.601917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.784 [2024-11-06 13:57:26.602136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.784 [2024-11-06 13:57:26.602296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.784 [2024-11-06 13:57:26.602344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.784 [2024-11-06 13:57:26.602371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.784 [2024-11-06 13:57:26.603901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.784 [2024-11-06 13:57:26.604048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.784 [2024-11-06 13:57:26.604235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.784 [2024-11-06 13:57:26.604237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.784 [2024-11-06 13:57:26.605793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
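At this point nvmfappstart has launched the target inside the namespace, and the EAL/reactor notices above show four reactors (core mask 0xF) coming up in interrupt rather than poll mode. A condensed sketch of the launch-and-wait step; the command line is taken from the trace, while the polling loop is only a stand-in for the suite's waitforlisten helper:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Block until the app answers on its RPC socket; rpc.py exits non-zero while
# the UNIX socket at /var/tmp/spdk.sock is not accepting connections yet.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done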
00:27:46.352 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:46.352 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:27:46.352 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.352 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.352 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 [2024-11-06 13:57:27.438322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:46.611 [2024-11-06 13:57:27.438504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:46.611 [2024-11-06 13:57:27.439988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:46.611 [2024-11-06 13:57:27.440594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
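Because the target was started with --wait-for-rpc, its subsystems are not yet initialized, which is what allows bdev_set_options to take effect before framework_start_init; the poll-group notices above are the framework threads switching to interrupt mode as they come up. The same pre-init step, plus the subsystem bring-up that follows next in the trace, expressed as plain rpc.py calls (a sketch; the suite issues these through its rpc_cmd wrapper, but every method name and flag below is verbatim from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1             # shrink bdev_io pool/cache before init
$rpc framework_start_init                   # now initialize the subsystems
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB ramdisk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420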
00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 [2024-11-06 13:57:27.449999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 Malloc0 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.611 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.612 [2024-11-06 13:57:27.538346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:46.612 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104440 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:46.872 13:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104442 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.872 { 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme$subsystem", 00:27:46.872 "trtype": "$TEST_TRANSPORT", 00:27:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "$NVMF_PORT", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.872 "hdgst": ${hdgst:-false}, 00:27:46.872 "ddgst": ${ddgst:-false} 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 } 00:27:46.872 EOF 00:27:46.872 )") 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.872 { 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme$subsystem", 00:27:46.872 "trtype": "$TEST_TRANSPORT", 00:27:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "$NVMF_PORT", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.872 "hdgst": ${hdgst:-false}, 00:27:46.872 "ddgst": ${ddgst:-false} 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 } 00:27:46.872 EOF 00:27:46.872 )") 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104445 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@560 -- # local subsystem config 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.872 { 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme$subsystem", 00:27:46.872 "trtype": "$TEST_TRANSPORT", 00:27:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "$NVMF_PORT", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.872 "hdgst": ${hdgst:-false}, 00:27:46.872 "ddgst": ${ddgst:-false} 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 } 00:27:46.872 EOF 00:27:46.872 )") 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104448 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.872 { 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme$subsystem", 00:27:46.872 "trtype": "$TEST_TRANSPORT", 00:27:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "$NVMF_PORT", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.872 "hdgst": ${hdgst:-false}, 00:27:46.872 "ddgst": ${ddgst:-false} 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 } 00:27:46.872 EOF 00:27:46.872 )") 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme1", 00:27:46.872 "trtype": "tcp", 00:27:46.872 "traddr": "10.0.0.3", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": 
"4420", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.872 "hdgst": false, 00:27:46.872 "ddgst": false 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 }' 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme1", 00:27:46.872 "trtype": "tcp", 00:27:46.872 "traddr": "10.0.0.3", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "4420", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.872 "hdgst": false, 00:27:46.872 "ddgst": false 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 }' 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme1", 00:27:46.872 "trtype": "tcp", 00:27:46.872 "traddr": "10.0.0.3", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "4420", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.872 "hdgst": false, 00:27:46.872 "ddgst": false 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 }' 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.872 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.872 "params": { 00:27:46.872 "name": "Nvme1", 00:27:46.872 "trtype": "tcp", 00:27:46.872 "traddr": "10.0.0.3", 00:27:46.872 "adrfam": "ipv4", 00:27:46.872 "trsvcid": "4420", 00:27:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.872 "hdgst": false, 00:27:46.872 "ddgst": false 00:27:46.872 }, 00:27:46.872 "method": "bdev_nvme_attach_controller" 00:27:46.872 }' 00:27:46.872 [2024-11-06 13:57:27.602731] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:46.872 [2024-11-06 13:57:27.602804] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:46.872 [2024-11-06 13:57:27.604003] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:27:46.873 [2024-11-06 13:57:27.604064] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:46.873 [2024-11-06 13:57:27.612537] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:46.873 [2024-11-06 13:57:27.612614] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:46.873 [2024-11-06 13:57:27.618369] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:46.873 [2024-11-06 13:57:27.618422] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:46.873 13:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104440 00:27:47.132 [2024-11-06 13:57:27.853509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.132 [2024-11-06 13:57:27.922512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:47.132 [2024-11-06 13:57:27.951441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.132 [2024-11-06 13:57:28.019332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:47.391 [2024-11-06 13:57:28.078866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.391 [2024-11-06 13:57:28.147890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.391 [2024-11-06 13:57:28.196588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.391 Running I/O for 1 seconds... 00:27:47.391 Running I/O for 1 seconds... 00:27:47.391 [2024-11-06 13:57:28.272614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:47.391 Running I/O for 1 seconds... 00:27:47.650 Running I/O for 1 seconds... 
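Each of the four bdevperf instances above (write/read/flush/unmap, one dedicated core each) receives its controller configuration as a JSON document on /dev/fd/63, i.e. via process substitution, which is why no config file ever appears on disk. A condensed sketch of what gen_nvmf_target_json hands over: the params block is verbatim from the trace, while the outer subsystems/bdev wrapper is assumed from SPDK's usual JSON-config layout.

{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }

Fed to one of the workers exactly as in the trace, e.g. for the write job:

./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256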
00:27:48.587 9612.00 IOPS, 37.55 MiB/s [2024-11-06T13:57:29.521Z] 250728.00 IOPS, 979.41 MiB/s 00:27:48.588 Latency(us) 00:27:48.588 [2024-11-06T13:57:29.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.588 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:48.588 Nvme1n1 : 1.00 250353.21 977.94 0.00 0.00 509.25 236.88 1487.06 00:27:48.588 [2024-11-06T13:57:29.521Z] =================================================================================================================== 00:27:48.588 [2024-11-06T13:57:29.521Z] Total : 250353.21 977.94 0.00 0.00 509.25 236.88 1487.06 00:27:48.588 00:27:48.588 Latency(us) 00:27:48.588 [2024-11-06T13:57:29.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.588 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:48.588 Nvme1n1 : 1.01 9630.56 37.62 0.00 0.00 13215.71 6711.52 18423.78 00:27:48.588 [2024-11-06T13:57:29.521Z] =================================================================================================================== 00:27:48.588 [2024-11-06T13:57:29.521Z] Total : 9630.56 37.62 0.00 0.00 13215.71 6711.52 18423.78 00:27:48.588 5013.00 IOPS, 19.58 MiB/s 00:27:48.588 Latency(us) 00:27:48.588 [2024-11-06T13:57:29.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.588 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:27:48.588 Nvme1n1 : 1.01 5112.24 19.97 0.00 0.00 24935.07 1855.54 30530.83 00:27:48.588 [2024-11-06T13:57:29.521Z] =================================================================================================================== 00:27:48.588 [2024-11-06T13:57:29.521Z] Total : 5112.24 19.97 0.00 0.00 24935.07 1855.54 30530.83 00:27:48.588 8927.00 IOPS, 34.87 MiB/s 00:27:48.588 Latency(us) 00:27:48.588 [2024-11-06T13:57:29.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.588 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:27:48.588 Nvme1n1 : 1.01 9024.22 35.25 0.00 0.00 14148.61 2513.53 24529.94 00:27:48.588 [2024-11-06T13:57:29.521Z] =================================================================================================================== 00:27:48.588 [2024-11-06T13:57:29.521Z] Total : 9024.22 35.25 0.00 0.00 14148.61 2513.53 24529.94 00:27:48.847 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104442 00:27:48.847 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104445 00:27:48.847 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104448 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:49.106 13:57:29 
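In the result tables above, the MiB/s column is simply IOPS scaled by the 4 KiB I/O size: 9630.56 IOPS × 4096 B ≈ 39.45 MB/s = 37.62 MiB/s for the read job, and likewise 250353.21 × 4096 / 2^20 ≈ 977.94 MiB/s for flush. A flush carries no data payload end to end (and the namespace is backed by the Malloc0 ramdisk), which is presumably why its rate sits more than an order of magnitude above the data-moving workloads.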
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.106 rmmod nvme_tcp 00:27:49.106 rmmod nvme_fabrics 00:27:49.106 rmmod nvme_keyring 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 104387 ']' 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 104387 00:27:49.106 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 104387 ']' 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 104387 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104387 00:27:49.107 killing process with pid 104387 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104387' 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 104387 00:27:49.107 13:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 104387 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:27:49.366 13:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:49.366 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:49.367 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:27:49.628 00:27:49.628 real 0m5.019s 00:27:49.628 user 0m15.537s 00:27:49.628 sys 0m3.150s 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:49.628 
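The teardown above undoes setup in reverse: iptr removes the firewall rules in bulk by filtering the SPDK_NVMF comment tag out of an iptables dump, the bridge and host-side veth ends are deleted (removing one end of a veth pair takes its peer with it), and _remove_spdk_ns reaps the namespace. A minimal sketch of the same cleanup:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk                       # reaps nvmf_tgt_if/_if2 as well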
************************************ 00:27:49.628 END TEST nvmf_bdev_io_wait 00:27:49.628 ************************************ 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:49.628 ************************************ 00:27:49.628 START TEST nvmf_queue_depth 00:27:49.628 ************************************ 00:27:49.628 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:49.890 * Looking for test storage... 00:27:49.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.890 --rc genhtml_branch_coverage=1 00:27:49.890 --rc genhtml_function_coverage=1 00:27:49.890 --rc genhtml_legend=1 00:27:49.890 --rc geninfo_all_blocks=1 00:27:49.890 --rc geninfo_unexecuted_blocks=1 00:27:49.890 00:27:49.890 ' 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.890 --rc genhtml_branch_coverage=1 00:27:49.890 --rc genhtml_function_coverage=1 00:27:49.890 --rc genhtml_legend=1 00:27:49.890 --rc geninfo_all_blocks=1 00:27:49.890 --rc geninfo_unexecuted_blocks=1 00:27:49.890 00:27:49.890 ' 00:27:49.890 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.890 --rc genhtml_branch_coverage=1 00:27:49.890 --rc genhtml_function_coverage=1 00:27:49.890 --rc genhtml_legend=1 00:27:49.890 --rc geninfo_all_blocks=1 00:27:49.890 --rc geninfo_unexecuted_blocks=1 00:27:49.890 00:27:49.890 ' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:49.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.891 --rc genhtml_branch_coverage=1 00:27:49.891 --rc genhtml_function_coverage=1 00:27:49.891 --rc genhtml_legend=1 00:27:49.891 --rc geninfo_all_blocks=1 00:27:49.891 --rc 
geninfo_unexecuted_blocks=1 00:27:49.891 00:27:49.891 ' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:49.891 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:50.151 Cannot find device "nvmf_init_br" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:50.151 Cannot find device "nvmf_init_br2" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:50.151 Cannot find device "nvmf_tgt_br" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:50.151 Cannot find device "nvmf_tgt_br2" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:50.151 Cannot find device "nvmf_init_br" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:50.151 Cannot find device "nvmf_init_br2" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:27:50.151 
13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:50.151 Cannot find device "nvmf_tgt_br" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:50.151 Cannot find device "nvmf_tgt_br2" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:50.151 Cannot find device "nvmf_br" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:50.151 Cannot find device "nvmf_init_if" 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:27:50.151 13:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:50.151 Cannot find device "nvmf_init_if2" 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:50.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:50.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:50.151 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:50.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:50.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:27:50.412 00:27:50.412 --- 10.0.0.3 ping statistics --- 00:27:50.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.412 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:50.412 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:50.412 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:27:50.412 00:27:50.412 --- 10.0.0.4 ping statistics --- 00:27:50.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.412 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:50.412 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:50.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:27:50.672 00:27:50.672 --- 10.0.0.1 ping statistics --- 00:27:50.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.672 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:50.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:50.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:27:50.672 00:27:50.672 --- 10.0.0.2 ping statistics --- 00:27:50.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.672 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=104736 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 104736 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 104736 ']' 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
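For reference, the topology that nvmf_veth_init has just finished assembling can be condensed from the traced commands into a short sketch (reconstructed from the log above, not taken from common.sh itself; the earlier "Cannot find device" lines are simply the pre-cleanup probes failing on a clean host):

    # two initiator-side veth pairs stay on the host; two target-side pairs
    # move into a private network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses on the host, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # one bridge ties the four *_br peer ends together (each veth end is
    # also brought up individually, as is lo inside the namespace)
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # open TCP/4420 on the initiator interfaces, tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'

The four pings (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) then verify both directions across the bridge before the target is started.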
00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.672 13:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:50.672 [2024-11-06 13:57:31.451400] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:50.672 [2024-11-06 13:57:31.452426] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:27:50.672 [2024-11-06 13:57:31.452482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.931 [2024-11-06 13:57:31.612547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.931 [2024-11-06 13:57:31.653282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.931 [2024-11-06 13:57:31.653333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.931 [2024-11-06 13:57:31.653344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.931 [2024-11-06 13:57:31.653353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.931 [2024-11-06 13:57:31.653377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.931 [2024-11-06 13:57:31.653660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.931 [2024-11-06 13:57:31.728684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:50.931 [2024-11-06 13:57:31.728982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
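The nvmfappstart call above boils down to launching the target inside that namespace; the exact invocation captured in the trace is:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2

The --interrupt-mode flag was appended by build_nvmf_app_args (the '[' 1 -eq 1 ']' branch at nvmf/common.sh@33-34 earlier in the trace), and the -m 0x2 core mask matches the single "Reactor started on core 1" notice that follows.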
00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.501 [2024-11-06 13:57:32.422577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.501 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.794 Malloc0 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:51.794 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
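Lines @23-@27 of queue_depth.sh provision the freshly started target over its RPC socket. rpc_cmd is the harness wrapper around the RPC client (scripts/rpc.py by default), so an equivalent standalone sequence would look roughly like this (assuming the default socket /var/tmp/spdk.sock; arguments copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener comes up on 10.0.0.3:4420 (the namespaced target address), the initiator side of the test can connect through the bridge.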
00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.795 [2024-11-06 13:57:32.502792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=104786 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 104786 /var/tmp/bdevperf.sock 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 104786 ']' 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:51.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:51.795 13:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.795 [2024-11-06 13:57:32.563011] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
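The initiator half runs as a second SPDK application with its own RPC socket. Condensed from this and the following trace lines, the bdevperf flow is:

    # 1. start bdevperf idle; -z makes it wait for RPC configuration
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &

    # 2. attach the exported namespace over NVMe/TCP (surfaces bdev NVMe0n1)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. run the timed workload: 10 s of 4 KiB verified I/O at queue depth 1024
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 depth is the point of the test, and the results below are self-consistent: ~11914 IOPS at an average latency of ~85.6 ms works out to roughly 11914 x 0.0856 ≈ 1020 commands in flight, i.e. the 1024-deep queue is kept essentially full for the whole run.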
00:27:51.795 [2024-11-06 13:57:32.563106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104786 ] 00:27:52.055 [2024-11-06 13:57:32.718092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.055 [2024-11-06 13:57:32.762126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:52.623 NVMe0n1 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.623 13:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:52.882 Running I/O for 10 seconds... 00:27:54.758 10636.00 IOPS, 41.55 MiB/s [2024-11-06T13:57:36.630Z] 11069.50 IOPS, 43.24 MiB/s [2024-11-06T13:57:38.009Z] 11324.00 IOPS, 44.23 MiB/s [2024-11-06T13:57:38.946Z] 11543.50 IOPS, 45.09 MiB/s [2024-11-06T13:57:39.884Z] 11631.20 IOPS, 45.43 MiB/s [2024-11-06T13:57:40.822Z] 11740.50 IOPS, 45.86 MiB/s [2024-11-06T13:57:41.760Z] 11796.71 IOPS, 46.08 MiB/s [2024-11-06T13:57:42.697Z] 11844.75 IOPS, 46.27 MiB/s [2024-11-06T13:57:43.635Z] 11859.00 IOPS, 46.32 MiB/s [2024-11-06T13:57:43.895Z] 11885.60 IOPS, 46.43 MiB/s 00:28:02.962 Latency(us) 00:28:02.962 [2024-11-06T13:57:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.962 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:02.962 Verification LBA range: start 0x0 length 0x4000 00:28:02.962 NVMe0n1 : 10.06 11914.50 46.54 0.00 0.00 85629.33 21476.86 86749.66 00:28:02.962 [2024-11-06T13:57:43.895Z] =================================================================================================================== 00:28:02.962 [2024-11-06T13:57:43.895Z] Total : 11914.50 46.54 0.00 0.00 85629.33 21476.86 86749.66 00:28:02.962 { 00:28:02.962 "results": [ 00:28:02.962 { 00:28:02.962 "job": "NVMe0n1", 00:28:02.962 "core_mask": "0x1", 00:28:02.962 "workload": "verify", 00:28:02.962 "status": "finished", 00:28:02.962 "verify_range": { 00:28:02.962 "start": 0, 00:28:02.962 "length": 16384 00:28:02.962 }, 00:28:02.962 "queue_depth": 1024, 00:28:02.962 "io_size": 4096, 00:28:02.962 "runtime": 10.061688, 00:28:02.962 "iops": 11914.501821165593, 00:28:02.962 "mibps": 46.5410227389281, 00:28:02.962 "io_failed": 0, 00:28:02.962 "io_timeout": 0, 00:28:02.962 "avg_latency_us": 85629.33234835908, 00:28:02.962 "min_latency_us": 21476.857831325302, 00:28:02.962 "max_latency_us": 86749.66104417671 00:28:02.962 } 00:28:02.962 
], 00:28:02.962 "core_count": 1 00:28:02.962 } 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 104786 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 104786 ']' 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 104786 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104786 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:02.962 killing process with pid 104786 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104786' 00:28:02.962 Received shutdown signal, test time was about 10.000000 seconds 00:28:02.962 00:28:02.962 Latency(us) 00:28:02.962 [2024-11-06T13:57:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.962 [2024-11-06T13:57:43.895Z] =================================================================================================================== 00:28:02.962 [2024-11-06T13:57:43.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 104786 00:28:02.962 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 104786 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.222 rmmod nvme_tcp 00:28:03.222 rmmod nvme_fabrics 00:28:03.222 rmmod nvme_keyring 00:28:03.222 13:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:03.222 13:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 104736 ']' 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 104736 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 104736 ']' 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 104736 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104736 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104736' 00:28:03.222 killing process with pid 104736 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 104736 00:28:03.222 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 104736 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:03.499 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:03.758 13:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:03.758 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:03.759 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.759 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.759 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:28:04.018 00:28:04.018 real 0m14.191s 00:28:04.018 user 0m21.608s 00:28:04.018 sys 0m3.342s 00:28:04.018 ************************************ 00:28:04.018 END TEST nvmf_queue_depth 00:28:04.018 ************************************ 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:04.018 ************************************ 00:28:04.018 START TEST nvmf_target_multipath 00:28:04.018 ************************************ 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:04.018 * Looking for test storage... 
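One detail worth flagging from the queue_depth teardown just above: firewall cleanup does not track individual rules. Every rule installed through ipts carried an 'SPDK_NVMF:' comment, and iptr drops them all at once; per the common.sh@791 commands in the trace, the plumbing is presumably along the lines of:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

The multipath test starting here then repeats the same nvmftestinit/nvmf_veth_init sequence from that clean slate.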
00:28:04.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:28:04.018 13:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:04.278 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:04.278 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.278 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.278 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:04.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.279 --rc genhtml_branch_coverage=1 00:28:04.279 --rc genhtml_function_coverage=1 00:28:04.279 --rc genhtml_legend=1 00:28:04.279 --rc geninfo_all_blocks=1 00:28:04.279 --rc geninfo_unexecuted_blocks=1 00:28:04.279 00:28:04.279 ' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:04.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.279 --rc genhtml_branch_coverage=1 00:28:04.279 --rc genhtml_function_coverage=1 00:28:04.279 --rc genhtml_legend=1 00:28:04.279 --rc geninfo_all_blocks=1 00:28:04.279 --rc geninfo_unexecuted_blocks=1 00:28:04.279 00:28:04.279 ' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:04.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.279 --rc genhtml_branch_coverage=1 00:28:04.279 --rc genhtml_function_coverage=1 00:28:04.279 --rc genhtml_legend=1 00:28:04.279 --rc geninfo_all_blocks=1 00:28:04.279 --rc geninfo_unexecuted_blocks=1 00:28:04.279 00:28:04.279 ' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:04.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.279 --rc genhtml_branch_coverage=1 00:28:04.279 --rc genhtml_function_coverage=1 00:28:04.279 --rc 
genhtml_legend=1 00:28:04.279 --rc geninfo_all_blocks=1 00:28:04.279 --rc geninfo_unexecuted_blocks=1 00:28:04.279 00:28:04.279 ' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # [PATH exports at paths/export.sh@2-@6 elided: the same repeated golangci/protoc/go PATH values already shown in full in the nvmf_queue_depth trace above] 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.279 13:57:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.279 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.280 13:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:04.280 Cannot find device "nvmf_init_br" 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:04.280 Cannot find device "nvmf_init_br2" 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:04.280 Cannot find device "nvmf_tgt_br" 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:04.280 Cannot find device "nvmf_tgt_br2" 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:28:04.280 Cannot find device "nvmf_init_br" 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:28:04.280 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:04.539 Cannot find device "nvmf_init_br2" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:04.539 Cannot find device "nvmf_tgt_br" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:04.539 Cannot find device "nvmf_tgt_br2" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:04.539 Cannot find device "nvmf_br" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:04.539 Cannot find device "nvmf_init_if" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:04.539 Cannot find device "nvmf_init_if2" 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:04.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:04.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:04.539 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:04.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:04.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:28:04.799 00:28:04.799 --- 10.0.0.3 ping statistics --- 00:28:04.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.799 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:04.799 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:04.799 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:28:04.799 00:28:04.799 --- 10.0.0.4 ping statistics --- 00:28:04.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.799 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:04.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:28:04.799 00:28:04.799 --- 10.0.0.1 ping statistics --- 00:28:04.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.799 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:04.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:04.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:28:04.799 00:28:04.799 --- 10.0.0.2 ping statistics --- 00:28:04.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.799 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=105176 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 105176 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 105176 ']' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:04.799 13:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:04.799 [2024-11-06 13:57:45.709279] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:04.799 [2024-11-06 13:57:45.710617] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:28:04.799 [2024-11-06 13:57:45.710680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.059 [2024-11-06 13:57:45.868263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.059 [2024-11-06 13:57:45.935452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.059 [2024-11-06 13:57:45.935698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.059 [2024-11-06 13:57:45.935850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.059 [2024-11-06 13:57:45.935919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.059 [2024-11-06 13:57:45.935946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.059 [2024-11-06 13:57:45.937313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.059 [2024-11-06 13:57:45.937491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.059 [2024-11-06 13:57:45.937687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.059 [2024-11-06 13:57:45.938570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.319 [2024-11-06 13:57:46.070528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:05.319 [2024-11-06 13:57:46.070562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:05.319 [2024-11-06 13:57:46.070927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:05.319 [2024-11-06 13:57:46.071026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:05.319 [2024-11-06 13:57:46.072163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
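
The trace above is nvmf_veth_init from nvmf/common.sh building the two-path NVMe/TCP test topology before the interrupt-mode target comes up: two initiator-side veth pairs (nvmf_init_if, nvmf_init_if2) stay in the root namespace, the two target-side pairs (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers are all enslaved to nvmf_br, and iptables rules admit NVMe/TCP port 4420. A condensed sketch of that sequence, reconstructed only from the commands logged above (the loop is an editorial condensation — the upstream script issues each command individually and tags its iptables rules with an SPDK_NVMF comment):

  # create the target namespace and the four veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2   # target path 2

  # move the target ends into the namespace and assign 10.0.0.0/24 addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up, then bridge the root-namespace peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done

  # admit NVMe/TCP traffic (port 4420) and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Connectivity is then verified with the four ping commands shown above, after which nvmf_tgt is launched inside the namespace with ip netns exec nvmf_tgt_ns_spdk ... --interrupt-mode, producing the reactor and intr-mode notices just printed.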
00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.886 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:06.145 [2024-11-06 13:57:46.883079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.145 13:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:06.405 Malloc0 00:28:06.405 13:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:28:06.664 13:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.923 13:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:07.181 [2024-11-06 13:57:47.938943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:07.181 13:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:28:07.439 [2024-11-06 13:57:48.142877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:28:07.439 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:28:07.439 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:28:07.697 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:28:07.697 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:28:07.697 13:57:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:28:07.697 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:28:07.697 13:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:09.595 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:28:09.596 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=105315 00:28:09.596 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:28:09.596 13:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:28:09.596 [global] 00:28:09.596 thread=1 00:28:09.596 invalidate=1 00:28:09.596 rw=randrw 00:28:09.596 time_based=1 00:28:09.596 runtime=6 00:28:09.596 ioengine=libaio 00:28:09.596 direct=1 00:28:09.596 bs=4096 00:28:09.596 iodepth=128 00:28:09.596 norandommap=0 00:28:09.596 numjobs=1 00:28:09.596 00:28:09.596 verify_dump=1 00:28:09.596 verify_backlog=512 00:28:09.596 verify_state_save=0 00:28:09.596 do_verify=1 00:28:09.596 verify=crc32c-intel 00:28:09.596 [job0] 00:28:09.596 filename=/dev/nvme0n1 00:28:09.854 Could not set queue depth (nvme0n1) 00:28:09.854 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:09.854 fio-3.35 00:28:09.854 Starting 1 thread 00:28:10.787 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:10.787 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:11.045 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:11.046 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:11.046 13:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:12.423 13:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:12.423 13:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:12.423 13:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:12.423 13:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:12.423 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:12.682 13:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:13.617 13:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:13.617 13:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:13.617 13:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:13.617 13:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 105315 00:28:16.145 00:28:16.145 job0: (groupid=0, jobs=1): err= 0: pid=105337: Wed Nov 6 13:57:56 2024 00:28:16.145 read: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(335MiB/6005msec) 00:28:16.145 slat (usec): min=5, max=4776, avg=37.28, stdev=153.51 00:28:16.145 clat (usec): min=301, max=52982, avg=6108.09, stdev=1705.17 00:28:16.145 lat (usec): min=406, max=52993, avg=6145.37, stdev=1708.95 00:28:16.145 clat percentiles (usec): 00:28:16.145 | 1.00th=[ 3687], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5407], 00:28:16.145 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6128], 00:28:16.145 | 70.00th=[ 6325], 80.00th=[ 6587], 90.00th=[ 7177], 95.00th=[ 8094], 00:28:16.145 | 99.00th=[ 9896], 99.50th=[12518], 99.90th=[15795], 99.95th=[49021], 00:28:16.145 | 99.99th=[52691] 00:28:16.145 bw ( KiB/s): min=14200, max=39848, per=51.16%, avg=29217.00, stdev=9709.04, samples=11 00:28:16.145 iops : min= 3550, max= 9962, avg=7304.18, stdev=2427.22, samples=11 00:28:16.145 write: IOPS=8831, BW=34.5MiB/s (36.2MB/s)(170MiB/4940msec); 0 zone resets 00:28:16.145 slat (usec): min=14, max=2584, avg=50.71, stdev=85.78 00:28:16.145 clat (usec): min=485, max=52530, avg=5440.10, stdev=1882.77 00:28:16.145 lat (usec): min=547, max=52556, avg=5490.81, stdev=1884.33 00:28:16.145 clat percentiles (usec): 00:28:16.145 | 1.00th=[ 3130], 5.00th=[ 3884], 10.00th=[ 4293], 20.00th=[ 4817], 00:28:16.145 | 30.00th=[ 5080], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5538], 00:28:16.145 | 70.00th=[ 5669], 80.00th=[ 5866], 90.00th=[ 6128], 95.00th=[ 6652], 00:28:16.145 | 99.00th=[ 9372], 99.50th=[11338], 99.90th=[47973], 99.95th=[50070], 00:28:16.145 | 99.99th=[51643] 00:28:16.145 bw ( KiB/s): min=14584, max=39336, per=82.71%, avg=29218.36, stdev=9212.83, samples=11 00:28:16.145 iops : min= 3646, max= 9834, avg=7304.55, stdev=2303.17, samples=11 00:28:16.145 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:28:16.145 lat (msec) : 2=0.16%, 4=3.27%, 10=95.66%, 20=0.78%, 50=0.05% 00:28:16.145 lat (msec) : 100=0.05% 00:28:16.145 cpu : usr=6.93%, sys=32.98%, ctx=12368, majf=0, minf=163 00:28:16.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:16.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:16.145 issued rwts: total=85738,43626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:16.145 00:28:16.145 Run status group 0 (all jobs): 00:28:16.145 READ: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=335MiB (351MB), run=6005-6005msec 00:28:16.145 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=170MiB (179MB), run=4940-4940msec 00:28:16.145 00:28:16.145 Disk stats (read/write): 00:28:16.145 nvme0n1: ios=84628/42673, merge=0/0, ticks=451278/202321, in_queue=653599, util=98.66% 00:28:16.145 13:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:16.145 
13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:16.403 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:28:16.404 13:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=105461 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:28:17.338 13:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:28:17.596 [global] 00:28:17.596 thread=1 00:28:17.596 invalidate=1 00:28:17.596 rw=randrw 00:28:17.596 time_based=1 00:28:17.596 runtime=6 00:28:17.596 ioengine=libaio 00:28:17.596 direct=1 00:28:17.596 bs=4096 00:28:17.596 iodepth=128 00:28:17.596 norandommap=0 00:28:17.596 numjobs=1 00:28:17.596 00:28:17.596 verify_dump=1 00:28:17.596 verify_backlog=512 00:28:17.596 verify_state_save=0 00:28:17.596 do_verify=1 00:28:17.596 verify=crc32c-intel 00:28:17.596 [job0] 00:28:17.596 filename=/dev/nvme0n1 00:28:17.596 Could not set queue depth (nvme0n1) 00:28:17.596 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:17.596 fio-3.35 00:28:17.596 Starting 1 thread 00:28:18.531 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:18.788 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:28:18.788 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:18.789 13:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:20.252 13:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:20.252 13:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:20.252 13:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:20.252 13:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:20.252 13:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:20.252 13:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:21.629 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:21.629 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:21.629 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:21.629 13:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 105461 00:28:24.161 00:28:24.161 job0: (groupid=0, jobs=1): err= 0: pid=105482: Wed Nov 6 13:58:04 2024 00:28:24.161 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(322MiB/6005msec) 00:28:24.161 slat (usec): min=5, max=6519, avg=34.52, stdev=146.35 00:28:24.161 clat (usec): min=239, max=20729, avg=6273.70, stdev=2672.90 00:28:24.161 lat (usec): min=269, max=20741, avg=6308.22, stdev=2674.04 00:28:24.161 clat percentiles (usec): 00:28:24.161 | 1.00th=[ 766], 5.00th=[ 2008], 10.00th=[ 4359], 20.00th=[ 5211], 00:28:24.161 | 30.00th=[ 5473], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6128], 00:28:24.161 | 70.00th=[ 6325], 80.00th=[ 6783], 90.00th=[ 8455], 95.00th=[12649], 00:28:24.161 | 99.00th=[16450], 99.50th=[17171], 99.90th=[18482], 99.95th=[19006], 00:28:24.161 | 99.99th=[20055] 00:28:24.161 bw ( KiB/s): min=15016, max=36152, per=50.56%, avg=27726.91, stdev=7705.02, samples=11 00:28:24.161 iops : min= 3754, max= 9038, avg=6931.73, stdev=1926.25, samples=11 00:28:24.161 write: IOPS=8102, BW=31.6MiB/s (33.2MB/s)(168MiB/5299msec); 0 zone resets 00:28:24.161 slat (usec): min=13, max=4497, avg=47.84, stdev=80.99 00:28:24.161 clat (usec): min=174, max=18600, avg=5701.86, stdev=2749.76 00:28:24.161 lat (usec): min=242, max=18630, avg=5749.70, stdev=2748.96 00:28:24.161 clat percentiles (usec): 00:28:24.161 | 1.00th=[ 562], 5.00th=[ 1139], 10.00th=[ 3589], 20.00th=[ 4555], 00:28:24.161 | 30.00th=[ 5014], 40.00th=[ 5211], 50.00th=[ 5407], 60.00th=[ 5538], 00:28:24.161 | 70.00th=[ 5735], 80.00th=[ 5997], 90.00th=[ 7832], 95.00th=[13435], 00:28:24.161 | 99.00th=[15139], 99.50th=[15664], 99.90th=[17171], 99.95th=[17695], 00:28:24.161 | 99.99th=[18220] 00:28:24.161 bw ( KiB/s): min=15720, 
max=35712, per=85.52%, avg=27716.73, stdev=7300.80, samples=11 00:28:24.161 iops : min= 3930, max= 8928, avg=6929.18, stdev=1825.20, samples=11 00:28:24.161 lat (usec) : 250=0.01%, 500=0.39%, 750=1.03%, 1000=1.37% 00:28:24.161 lat (msec) : 2=2.82%, 4=4.21%, 10=83.05%, 20=7.12%, 50=0.01% 00:28:24.161 cpu : usr=6.90%, sys=31.66%, ctx=13480, majf=0, minf=127 00:28:24.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:24.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:24.161 issued rwts: total=82320,42933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:24.161 00:28:24.161 Run status group 0 (all jobs): 00:28:24.161 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=322MiB (337MB), run=6005-6005msec 00:28:24.161 WRITE: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=168MiB (176MB), run=5299-5299msec 00:28:24.161 00:28:24.161 Disk stats (read/write): 00:28:24.161 nvme0n1: ios=81275/41959, merge=0/0, ticks=457013/216668, in_queue=673681, util=98.67% 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:24.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:28:24.161 13:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.161 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:28:24.161 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:24.162 13:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.162 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.162 rmmod nvme_tcp 00:28:24.162 rmmod nvme_fabrics 00:28:24.421 rmmod nvme_keyring 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 105176 ']' 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 105176 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 105176 ']' 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 105176 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 105176 00:28:24.421 killing process with pid 105176 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 105176' 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 105176 00:28:24.421 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 105176 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:24.681 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:24.682 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:24.682 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:24.682 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:24.682 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:28:24.941 ************************************ 00:28:24.941 END TEST nvmf_target_multipath 00:28:24.941 ************************************ 00:28:24.941 00:28:24.941 real 0m21.020s 00:28:24.941 user 1m9.283s 00:28:24.941 sys 0m10.875s 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:24.941 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:25.202 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:25.202 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:25.202 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.202 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:25.202 ************************************ 00:28:25.202 START TEST nvmf_zcopy 00:28:25.202 ************************************ 00:28:25.202 13:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:25.202 * Looking for test storage... 00:28:25.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.202 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.462 --rc genhtml_branch_coverage=1 00:28:25.462 --rc genhtml_function_coverage=1 00:28:25.462 --rc genhtml_legend=1 00:28:25.462 --rc geninfo_all_blocks=1 00:28:25.462 --rc geninfo_unexecuted_blocks=1 00:28:25.462 00:28:25.462 ' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:25.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.462 --rc genhtml_branch_coverage=1 00:28:25.462 --rc genhtml_function_coverage=1 00:28:25.462 --rc genhtml_legend=1 00:28:25.462 --rc geninfo_all_blocks=1 00:28:25.462 --rc geninfo_unexecuted_blocks=1 00:28:25.462 00:28:25.462 ' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.462 --rc genhtml_branch_coverage=1 00:28:25.462 --rc genhtml_function_coverage=1 00:28:25.462 --rc genhtml_legend=1 00:28:25.462 --rc geninfo_all_blocks=1 00:28:25.462 --rc geninfo_unexecuted_blocks=1 00:28:25.462 00:28:25.462 ' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.462 --rc genhtml_branch_coverage=1 00:28:25.462 --rc genhtml_function_coverage=1 00:28:25.462 --rc genhtml_legend=1 00:28:25.462 --rc geninfo_all_blocks=1 00:28:25.462 --rc geninfo_unexecuted_blocks=1 00:28:25.462 00:28:25.462 ' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.462 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.463 13:58:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:25.463 13:58:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:25.463 Cannot find device "nvmf_init_br" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:25.463 Cannot find device "nvmf_init_br2" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:25.463 Cannot find device "nvmf_tgt_br" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:25.463 Cannot find device "nvmf_tgt_br2" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:25.463 Cannot find device "nvmf_init_br" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:25.463 Cannot find device "nvmf_init_br2" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:25.463 Cannot find device "nvmf_tgt_br" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:25.463 Cannot find device "nvmf_tgt_br2" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:25.463 Cannot find device 
"nvmf_br" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:25.463 Cannot find device "nvmf_init_if" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:25.463 Cannot find device "nvmf_init_if2" 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:25.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:28:25.463 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:25.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:25.723 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:25.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:25.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:28:25.983 00:28:25.983 --- 10.0.0.3 ping statistics --- 00:28:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.983 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:25.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:28:25.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:28:25.983 00:28:25.983 --- 10.0.0.4 ping statistics --- 00:28:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.983 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:25.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:28:25.983 00:28:25.983 --- 10.0.0.1 ping statistics --- 00:28:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.983 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:25.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:28:25.983 00:28:25.983 --- 10.0.0.2 ping statistics --- 00:28:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.983 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=105815 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:25.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
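[Editor's note] The ping exchanges above are the final step of the nvmf_veth_init fixture traced earlier: two initiator veth ends stay on the host (10.0.0.1/24 and 10.0.0.2/24), two target ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), everything is joined through the nvmf_br bridge, and SPDK_NVMF-tagged iptables ACCEPT rules open port 4420. A minimal sketch of one initiator/target pair built the same way (interface names, addresses, and rule text taken from the trace; run as root, error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # tie both bridge ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                          # host -> namespace, as verified above

The SPDK_NVMF comment on each rule is what lets the teardown's iptr helper restore the firewall by filtering iptables-save output through grep -v SPDK_NVMF into iptables-restore, as seen at the end of the multipath test above.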
00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 105815 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 105815 ']' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:25.983 13:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:25.983 [2024-11-06 13:58:06.782873] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:25.983 [2024-11-06 13:58:06.783803] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:28:25.983 [2024-11-06 13:58:06.783847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.242 [2024-11-06 13:58:06.936377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.242 [2024-11-06 13:58:07.013991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.242 [2024-11-06 13:58:07.014053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.242 [2024-11-06 13:58:07.014063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.242 [2024-11-06 13:58:07.014072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.242 [2024-11-06 13:58:07.014079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.242 [2024-11-06 13:58:07.014447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.242 [2024-11-06 13:58:07.148470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:26.242 [2024-11-06 13:58:07.148999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
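[Editor's note] The waitforlisten call above gates the test until the nvmf_tgt just spawned (pid 105815) answers on /var/tmp/spdk.sock; the (( i == 0 )) and return 0 traces that follow are its retry loop finishing. A rough, illustrative re-implementation of that readiness gate (rpc_addr default and max_retries=100 match the trace; the loop body itself is an assumption, not the harness source):

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = max_retries; i != 0; i--)); do
      kill -0 "$pid" 2>/dev/null || return 1                  # target process died early
      # rpc_get_methods succeeds once the JSON-RPC server is accepting requests
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1                                                  # never started listening
  }

Polling an innocuous RPC rather than the socket file avoids the race where the socket exists but the app is not yet servicing requests.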
00:28:26.810 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.810 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:28:26.810 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.810 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.810 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.070 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.070 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:27.070 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 [2024-11-06 13:58:07.763341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 [2024-11-06 13:58:07.791631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:27.071 13:58:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 malloc0 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:27.071 { 00:28:27.071 "params": { 00:28:27.071 "name": "Nvme$subsystem", 00:28:27.071 "trtype": "$TEST_TRANSPORT", 00:28:27.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.071 "adrfam": "ipv4", 00:28:27.071 "trsvcid": "$NVMF_PORT", 00:28:27.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.071 "hdgst": ${hdgst:-false}, 00:28:27.071 "ddgst": ${ddgst:-false} 00:28:27.071 }, 00:28:27.071 "method": "bdev_nvme_attach_controller" 00:28:27.071 } 00:28:27.071 EOF 00:28:27.071 )") 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:27.071 13:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:27.071 "params": { 00:28:27.071 "name": "Nvme1", 00:28:27.071 "trtype": "tcp", 00:28:27.071 "traddr": "10.0.0.3", 00:28:27.071 "adrfam": "ipv4", 00:28:27.071 "trsvcid": "4420", 00:28:27.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.071 "hdgst": false, 00:28:27.071 "ddgst": false 00:28:27.071 }, 00:28:27.071 "method": "bdev_nvme_attach_controller" 00:28:27.071 }' 00:28:27.071 [2024-11-06 13:58:07.908089] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
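[Editor's note] The rpc_cmd calls traced above provision the whole zcopy target over JSON-RPC, after which bdevperf is pointed at it through a config generated on an anonymous fd. The same sequence with rpc.py invoked directly (flags copied from the trace; assumes the SPDK repo root as working directory and the default /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport, zero-copy on
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB backing bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # The verify job launching here is then equivalent to (gen_nvmf_target_json is the
  # harness helper whose heredoc/printf output is shown above):
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

Note the subsystem serial in the trace is SPDK00000000000001, the zcopy.sh default; the line above uses the common.sh default only as a placeholder for whichever serial the caller passes with -s.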
00:28:27.071 [2024-11-06 13:58:07.908159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105866 ] 00:28:27.331 [2024-11-06 13:58:08.061119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.331 [2024-11-06 13:58:08.123952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.590 Running I/O for 10 seconds... 00:28:29.478 8000.00 IOPS, 62.50 MiB/s [2024-11-06T13:58:11.363Z] 8039.00 IOPS, 62.80 MiB/s [2024-11-06T13:58:12.740Z] 8037.67 IOPS, 62.79 MiB/s [2024-11-06T13:58:13.677Z] 8012.25 IOPS, 62.60 MiB/s [2024-11-06T13:58:14.618Z] 7971.80 IOPS, 62.28 MiB/s [2024-11-06T13:58:15.552Z] 7940.50 IOPS, 62.04 MiB/s [2024-11-06T13:58:16.487Z] 7950.14 IOPS, 62.11 MiB/s [2024-11-06T13:58:17.421Z] 7965.12 IOPS, 62.23 MiB/s [2024-11-06T13:58:18.392Z] 7972.89 IOPS, 62.29 MiB/s [2024-11-06T13:58:18.392Z] 7985.80 IOPS, 62.39 MiB/s 00:28:37.459 Latency(us) 00:28:37.459 [2024-11-06T13:58:18.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.459 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:37.459 Verification LBA range: start 0x0 length 0x1000 00:28:37.459 Nvme1n1 : 10.01 7989.38 62.42 0.00 0.00 15977.76 2263.49 22003.25 00:28:37.459 [2024-11-06T13:58:18.392Z] =================================================================================================================== 00:28:37.459 [2024-11-06T13:58:18.392Z] Total : 7989.38 62.42 0.00 0.00 15977.76 2263.49 22003.25 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105984 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.719 { 00:28:37.719 "params": { 00:28:37.719 "name": "Nvme$subsystem", 00:28:37.719 "trtype": "$TEST_TRANSPORT", 00:28:37.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.719 "adrfam": "ipv4", 00:28:37.719 "trsvcid": "$NVMF_PORT", 00:28:37.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.719 "hdgst": ${hdgst:-false}, 00:28:37.719 "ddgst": ${ddgst:-false} 00:28:37.719 }, 00:28:37.719 "method": "bdev_nvme_attach_controller" 00:28:37.719 } 00:28:37.719 EOF 00:28:37.719 )") 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:37.719 [2024-11-06 
13:58:18.642994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.719 [2024-11-06 13:58:18.643036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.719 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.719 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:37.979 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:37.979 13:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.979 "params": { 00:28:37.979 "name": "Nvme1", 00:28:37.979 "trtype": "tcp", 00:28:37.979 "traddr": "10.0.0.3", 00:28:37.979 "adrfam": "ipv4", 00:28:37.979 "trsvcid": "4420", 00:28:37.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.979 "hdgst": false, 00:28:37.979 "ddgst": false 00:28:37.979 }, 00:28:37.979 "method": "bdev_nvme_attach_controller" 00:28:37.979 }' 00:28:37.979 [2024-11-06 13:58:18.658932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.658949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.674924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.674939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 [2024-11-06 13:58:18.677475] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
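[Editor's note] The burst of identical failures that follows appears deliberate: while the second bdevperf job (pid 105984, 5 s of randrw at queue depth 128, 50/50 read/write mix) keeps I/O in flight, the script repeatedly re-adds namespace 1 to cnode1, and each call is rejected with Code=-32602 because NSID 1 is already attached, exercising the target's RPC error path under active zero-copy traffic. One such rejected call, reproduced by hand (paths assumed; the harness drives it through its rpc_cmd wrapper):

  if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    echo "unexpected success: NSID 1 should already be in use" >&2
    exit 1
  fi   # expected outcome: error received, Code=-32602 Msg=Invalid parameters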
00:28:37.979 [2024-11-06 13:58:18.677540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105984 ] 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.686929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.686945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.698928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.698944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.710928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.710944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.722931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.722954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.734926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.734946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:37.979 [2024-11-06 13:58:18.746931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.979 [2024-11-06 13:58:18.746948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:37.979 2024/11/06 13:58:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:37.979 [2024-11-06 13:58:18.758965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:37.979 [2024-11-06 13:58:18.758980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:37.980 [2024-11-06 13:58:18.830787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:37.980 [2024-11-06 13:58:18.895873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the same three-line error pattern repeats at roughly 12-16 ms intervals from 13:58:18.770 through 13:58:19.110; only the timestamps differ ...]
00:28:38.240 Running I/O for 5 seconds...
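The repeated failure above is easier to read as a single request/response exchange. The sketch below is a hypothetical reconstruction, not part of the test harness: it issues the same nvmf_subsystem_add_ns call (method name and params copied from the log records above) against an SPDK target over its default RPC Unix socket, /var/tmp/spdk.sock (an assumed path here). Code -32602 is the standard JSON-RPC 2.0 "Invalid params" code, which SPDK returns in this case because NSID 1 is already attached to the subsystem.

# Minimal sketch (assumed socket path; method and params taken from the log above).
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # One JSON-RPC 2.0 request/response round trip over SPDK's Unix-domain RPC socket.
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536))

resp = spdk_rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
})
print(resp.get("error"))  # expected: code -32602, "Invalid parameters", as logged above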
00:28:38.240 2024/11/06 13:58:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:38.240 [2024-11-06 13:58:19.130967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:38.240 [2024-11-06 13:58:19.130996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pattern continues from 13:58:19.142 through 13:58:20.106 while the I/O workload runs; only the timestamps differ ...]
00:28:39.280 15792.00 IOPS, 123.38 MiB/s [2024-11-06T13:58:20.213Z]
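As a quick cross-check of the performance sample above (a back-of-the-envelope calculation, not from the log itself): 123.38 MiB/s divided by 15792.00 IOPS comes to about 8192 bytes per request, so the workload appears to be issuing 8 KiB I/Os, assuming both figures cover the same interval.

# Arithmetic check: bytes per I/O implied by the logged IOPS and throughput.
iops = 15792.00
mib_per_s = 123.38
print(round(mib_per_s * 1024 * 1024 / iops))  # -> 8192 (8 KiB per I/O)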
00:28:39.280 [2024-11-06 13:58:20.123338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:39.280 [2024-11-06 13:58:20.123366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:39.280 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same pattern repeats from 13:58:20.142 through 13:58:20.788; only the timestamps differ ...]
00:28:40.062 [2024-11-06 13:58:20.806447]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.806478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.823158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.823186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.839425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.839454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.858547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.858576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.875193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.875220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.891276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.891303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.906646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.906676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.922573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.922606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.938742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.938775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.955265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.955296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.974511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.974542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.062 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.062 [2024-11-06 13:58:20.991603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.062 [2024-11-06 13:58:20.991633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.007395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.007425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.022815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.022845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.034591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.034620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.051190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.051219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.067021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.067050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.083660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.083689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 [2024-11-06 13:58:21.102690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.102720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.322 15821.00 IOPS, 123.60 MiB/s [2024-11-06T13:58:21.255Z] [2024-11-06 13:58:21.118594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:40.322 [2024-11-06 13:58:21.118625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.322 2024/11/06 13:58:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
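The burst above comes from a test path that repeatedly re-issues the same nvmf_subsystem_add_ns call while NSID 1 is already claimed on the subsystem; each retry is rejected with JSON-RPC error -32602, and the "%!s(bool=false)" fragment in the params dump is just Go's fmt output for a bool printed with the %s verb. A minimal reproduction sketch, assuming a running SPDK target and the stock scripts/rpc.py client (subsystem NQN, bdev name, and NSID are taken from the log; the malloc bdev size and exact flag spellings are illustrative):

    # hypothetical setup: create the subsystem and a malloc bdev to attach
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    # first call claims NSID 1 and succeeds
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # any repeat with the same NSID is rejected with Code=-32602,
    # logging "Requested NSID 1 already in use" on the target side
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0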
[... identical Code=-32602 rejections continue at the same cadence from 13:58:21.118 through 13:58:22.106, under elapsed-time prefixes 00:28:40.322-00:28:41.364; nothing but the timestamps changes ...]
15815.33 IOPS, 123.56 MiB/s [2024-11-06T13:58:22.297Z]
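For reference, the Go map dump in each rejected call corresponds to the wire exchange below, reconstructed from the fields logged above. This is a sketch assuming the default /var/tmp/spdk.sock RPC socket, a netcat with Unix-socket support, and an illustrative request id:

    # hypothetical raw JSON-RPC exchange over the default SPDK RPC socket
    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}' | nc -U /var/tmp/spdk.sock
    # expected reply while NSID 1 is taken:
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}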
[... the rejection loop runs on from 13:58:22.122 through 13:58:22.590 with the same triple and identical params ...] 00:28:41.885 [2024-11-06 13:58:22.606896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:28:41.885 [2024-11-06 13:58:22.606927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.885 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.885 [2024-11-06 13:58:22.622600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.885 [2024-11-06 13:58:22.622634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.885 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.885 [2024-11-06 13:58:22.638628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.885 [2024-11-06 13:58:22.638660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.885 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.885 [2024-11-06 13:58:22.655459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.885 [2024-11-06 13:58:22.655489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.674020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.674052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.691670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.691701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.707135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.707164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.721126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.721185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.738166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.738199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.755504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.755534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.772401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.772432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.791420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.791450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:41.886 [2024-11-06 13:58:22.807372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:41.886 [2024-11-06 13:58:22.807401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:41.886 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.826054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:42.146 [2024-11-06 13:58:22.826084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.843474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.843505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.862214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.862248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.879122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.879152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.891847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.891888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.910756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.910787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.927508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.927539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.946460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.946491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.963373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.963403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.982095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.982125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:22.999307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:22.999337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:23.018204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:23.018235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:23.035206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:23.035242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:23.051174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:23.051203] 
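
A reading note on the params dumps: the no_auto_visible:%!s(bool=false) token is log-formatting residue, not a malformed field. These dumps come from a Go JSON-RPC client (the map[...] syntax is Go's fmt output), and when Go formats a bool with the %s verb it prints the mismatch marker %!s(bool=false); the field's actual value is simply false. Reproducible in a few lines:

    package main

    import "fmt"

    func main() {
        // fmt flags a verb/type mismatch instead of erroring out: a bool
        // formatted with %s renders as "%!s(bool=false)".
        fmt.Println(fmt.Sprintf("%s", false))
    }
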
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:23.062927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:23.062957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.146 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.146 [2024-11-06 13:58:23.077294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.146 [2024-11-06 13:58:23.077326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.094612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.094644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 15776.00 IOPS, 123.25 MiB/s [2024-11-06T13:58:23.339Z] [2024-11-06 13:58:23.111186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.111218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.123187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.123217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.142051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.142081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.159713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.159744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.177908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.177938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.194504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.194535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.210794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.210824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.223465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.223495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.239483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.239512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.257743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.257773] 
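
The bare throughput lines interleaved with the errors (15776.00 IOPS, 123.25 MiB/s just above) are bdevperf's periodic progress samples, and they are internally consistent with the job's 8192-byte I/O size: MiB/s = IOPS x 8192 / 2^20, i.e. IOPS / 128. A quick check of the samples seen in this run:

    package main

    import "fmt"

    func main() {
        const ioSize = 8192.0 // bytes per I/O, from the job line in the summary below
        for _, iops := range []float64{15776.00, 15710.40} { // progress samples in this log
            fmt.Printf("%8.2f IOPS -> %6.2f MiB/s\n", iops, iops*ioSize/(1<<20))
        }
    }
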
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.274948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.274977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.288877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.288905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.307566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.307595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.406 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.406 [2024-11-06 13:58:23.326198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.406 [2024-11-06 13:58:23.326226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.407 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.342545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.342573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.359351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.359379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.377954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.377984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.394723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.394752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.410571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.410601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.427017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.427048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.440675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.440707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.458804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.458834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.666 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.666 [2024-11-06 13:58:23.472706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.666 [2024-11-06 13:58:23.472736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.491319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.491352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.507250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.507281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.524030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.524061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.540239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.540272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.556506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.556539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.667 [2024-11-06 13:58:23.572996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.573027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:28:42.667 [2024-11-06 13:58:23.589634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.667 [2024-11-06 13:58:23.589667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.667 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.926 [2024-11-06 13:58:23.606969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.926 [2024-11-06 13:58:23.607000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.619446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.619481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.635832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.635878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.651988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.652019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.671987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.672019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.691426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.691459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.708165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.708197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.727680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.727716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.744573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.744609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.761812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.762027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.779800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.779836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.797186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.797353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.815740] 
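
Each failure leaves the same two SPDK frames: spdk_nvmf_subsystem_add_ns_ext (subsystem.c:2123) detects the NSID collision, and nvmf_rpc_ns_paused (nvmf_rpc.c:1517) reports it, its name suggesting the RPC briefly pauses the subsystem before attempting the add, so the loop stresses that pause/resume admin path without ever changing the namespace map. The collision check itself is the usual look-up-before-insert guard; an illustrative sketch of that pattern (not SPDK's actual code):

    package main

    import "fmt"

    // Illustrative only: a namespace table that rejects an NSID already in
    // use, mirroring the "Requested NSID 1 already in use" check above.
    type subsystem struct {
        ns map[uint32]string // nsid -> bdev name
    }

    func (s *subsystem) addNS(nsid uint32, bdev string) error {
        if _, taken := s.ns[nsid]; taken {
            return fmt.Errorf("requested NSID %d already in use", nsid)
        }
        s.ns[nsid] = bdev
        return nil
    }

    func main() {
        s := &subsystem{ns: map[uint32]string{1: "malloc0"}}
        if err := s.addNS(1, "malloc0"); err != nil {
            fmt.Println(err) // requested NSID 1 already in use
        }
    }
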
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.815775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.832467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.832641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:42.927 [2024-11-06 13:58:23.851503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:42.927 [2024-11-06 13:58:23.851667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:42.927 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.867798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.867952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.887108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.887138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.901349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.901379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.918916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.918945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.932986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.933016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.949292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.949322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.187 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.187 [2024-11-06 13:58:23.966649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.187 [2024-11-06 13:58:23.966680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.188 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.188 [2024-11-06 13:58:23.982448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.188 [2024-11-06 13:58:23.982479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.188 2024/11/06 13:58:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.188 [2024-11-06 13:58:23.999358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.188 [2024-11-06 13:58:23.999389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.188 [2024-11-06 13:58:24.016474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.188 [2024-11-06 13:58:24.016506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.188 [2024-11-06 13:58:24.034749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.034783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:43.188 [2024-11-06 13:58:24.051182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.051214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:43.188 [2024-11-06 13:58:24.063827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.063870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:43.188 [2024-11-06 13:58:24.080352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.080384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:43.188 [2024-11-06 13:58:24.097078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.097110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:43.188 15710.40 IOPS, 122.74 MiB/s
[2024-11-06T13:58:24.121Z] [2024-11-06 13:58:24.112583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:43.188 [2024-11-06 13:58:24.112615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:43.188
00:28:43.188                                        Latency(us)
00:28:43.188 [2024-11-06T13:58:24.121Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:43.188 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:28:43.188 Nvme1n1            :       5.01   15711.86     122.75       0.00       0.00    8138.19    2105.57   14844.30
00:28:43.188 [2024-11-06T13:58:24.121Z] ===================================================================================================================
00:28:43.188 [2024-11-06T13:58:24.121Z] Total              :              15711.86     122.75       0.00       0.00    8138.19    2105.57   14844.30
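
The summary closes the 5-second bdevperf run. Its numbers are self-consistent under Little's law: with queue depth 128 and an average latency of 8138.19 us, the expected rate is 128 / 8138.19e-6, about 15728 IOPS, within roughly 0.1% of the measured 15711.86. A sketch of that check:

    package main

    import "fmt"

    func main() {
        const (
            depth    = 128.0    // queue depth from the job line above
            avgLatUs = 8138.19  // average latency (us) from the summary
            measured = 15711.86 // measured IOPS from the summary
        )
        // Little's law: concurrency = rate * latency, so rate = depth / latency.
        expected := depth / (avgLatUs * 1e-6)
        fmt.Printf("expected %.0f IOPS, measured %.2f IOPS (%.2f%% apart)\n",
            expected, measured, 100*(expected-measured)/measured)
    }

00:28:43.188 2024/11/06 13:58:24 error on JSON-RPC call,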
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.122951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.122981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.138959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.138991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.150948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.150975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.162941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.162965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.178928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.178949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.190937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.190961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.448 [2024-11-06 13:58:24.202938] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.448 [2024-11-06 13:58:24.202960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.448 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.214935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.214957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.226937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.226960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.238937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.238961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.254951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.254995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.266942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.266972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.278939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.278961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.290965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.290997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.302944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.302972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.318928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.318950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.330936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.330958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.342934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.342955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.354935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.354957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.449 [2024-11-06 13:58:24.370942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:28:43.449 [2024-11-06 13:58:24.370972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.449 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.740 [2024-11-06 13:58:24.382951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.740 [2024-11-06 13:58:24.382984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.740 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.740 [2024-11-06 13:58:24.394999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.740 [2024-11-06 13:58:24.395043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.740 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.740 [2024-11-06 13:58:24.406952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:43.740 [2024-11-06 13:58:24.406982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:43.741 2024/11/06 13:58:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:43.741 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105984) - No such process 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105984 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:43.741 delay0 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.741 13:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:28:43.741 [2024-11-06 13:58:24.623343] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:51.866 Initializing NVMe Controllers 00:28:51.866 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.866 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.866 Initialization complete. Launching workers. 00:28:51.866 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 229, failed: 33396 00:28:51.866 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33487, failed to submit 138 00:28:51.866 success 33408, unsuccessful 79, failed 0 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.866 rmmod nvme_tcp 00:28:51.866 rmmod nvme_fabrics 00:28:51.866 rmmod nvme_keyring 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 105815 ']' 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 105815 ']' 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
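For reference, the abort pass just logged is driven by SPDK's abort example binary; stripped of the harness wrapping, a stand-alone re-run under the same flags would look roughly like the sketch below (the paths, the 10.0.0.3:4420 listener and namespace 1 are this run's own values, and the target from this run would still need to be up):

# Sketch: re-run the abort exerciser by hand.
# -c 0x1: core mask; -t 5: run time in seconds; -q 64: queue depth;
# -w randrw -M 50: 50/50 random read/write mix; -r: transport ID of the listener
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

In the summary it prints, the large "failed" I/O count is expected rather than alarming: those appear to be the I/Os cancelled by the abort commands, while the "abort submitted ... success ... unsuccessful" line tallies the abort commands themselves.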
00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:51.866 killing process with pid 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 105815' 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 105815 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:51.866 13:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:28:51.866 00:28:51.866 real 0m26.402s 00:28:51.866 user 0m37.823s 00:28:51.866 sys 0m11.319s 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 ************************************ 00:28:51.866 END TEST nvmf_zcopy 00:28:51.866 ************************************ 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 ************************************ 00:28:51.866 START TEST nvmf_nmic 00:28:51.866 ************************************ 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:51.866 * Looking for test storage... 
00:28:51.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.866 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.867 --rc genhtml_branch_coverage=1 00:28:51.867 --rc genhtml_function_coverage=1 00:28:51.867 --rc genhtml_legend=1 00:28:51.867 --rc geninfo_all_blocks=1 00:28:51.867 --rc geninfo_unexecuted_blocks=1 00:28:51.867 00:28:51.867 ' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.867 --rc genhtml_branch_coverage=1 00:28:51.867 --rc genhtml_function_coverage=1 00:28:51.867 --rc genhtml_legend=1 00:28:51.867 --rc geninfo_all_blocks=1 00:28:51.867 --rc geninfo_unexecuted_blocks=1 00:28:51.867 00:28:51.867 ' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.867 --rc genhtml_branch_coverage=1 00:28:51.867 --rc genhtml_function_coverage=1 00:28:51.867 --rc genhtml_legend=1 00:28:51.867 --rc geninfo_all_blocks=1 00:28:51.867 --rc geninfo_unexecuted_blocks=1 00:28:51.867 00:28:51.867 ' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.867 --rc genhtml_branch_coverage=1 00:28:51.867 --rc genhtml_function_coverage=1 00:28:51.867 --rc genhtml_legend=1 00:28:51.867 --rc geninfo_all_blocks=1 00:28:51.867 --rc geninfo_unexecuted_blocks=1 00:28:51.867 00:28:51.867 ' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain trio repeated several more times ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the identical list with /opt/go/1.21.1/bin prepended once more ...]
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the identical list with /opt/protoc/21.7/bin prepended once more ...]
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the final exported PATH, identical to the @4 value ...]
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:51.867 13:58:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.867 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:51.868 Cannot find device "nvmf_init_br" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:51.868 Cannot find device "nvmf_init_br2" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:51.868 Cannot find device "nvmf_tgt_br" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:51.868 Cannot find device "nvmf_tgt_br2" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:51.868 Cannot find device "nvmf_init_br" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:51.868 Cannot find device "nvmf_init_br2" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:51.868 Cannot find device "nvmf_tgt_br" 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:28:51.868 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:52.129 Cannot find device "nvmf_tgt_br2" 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
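The chain of "Cannot find device" messages above is the expected output of the pre-flight teardown probing for leftover interfaces on a clean host; nvmf_veth_init then builds the topology from scratch, as logged below. Condensed into a sketch (the names and 10.0.0.x/24 addresses are the harness's fixed defaults, and the second initiator/target interface pair is elided here):

# Sketch of the veth/netns topology built by the commands logged below.
ip netns add nvmf_tgt_ns_spdk                               # isolated namespace for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the two sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # initiator -> target reachability check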
00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:52.129 Cannot find device "nvmf_br" 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:52.129 Cannot find device "nvmf_init_if" 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:52.129 Cannot find device "nvmf_init_if2" 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:52.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:52.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:52.129 13:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:52.129 13:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:52.129 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:52.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:52.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:28:52.389 00:28:52.389 --- 10.0.0.3 ping statistics --- 00:28:52.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.389 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:52.389 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:52.389 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:28:52.389 00:28:52.389 --- 10.0.0.4 ping statistics --- 00:28:52.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.389 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:52.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:52.389 00:28:52.389 --- 10.0.0.1 ping statistics --- 00:28:52.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.389 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:52.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:28:52.389 00:28:52.389 --- 10.0.0.2 ping statistics --- 00:28:52.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.389 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=106367 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 106367 00:28:52.389 13:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 106367 ']' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:52.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:52.389 13:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:52.389 [2024-11-06 13:58:33.318477] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:52.389 [2024-11-06 13:58:33.319493] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:28:52.389 [2024-11-06 13:58:33.319559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.649 [2024-11-06 13:58:33.474463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.649 [2024-11-06 13:58:33.541572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.649 [2024-11-06 13:58:33.541636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.649 [2024-11-06 13:58:33.541646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.649 [2024-11-06 13:58:33.541654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.649 [2024-11-06 13:58:33.541662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.649 [2024-11-06 13:58:33.543041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.649 [2024-11-06 13:58:33.543259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.649 [2024-11-06 13:58:33.544226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.649 [2024-11-06 13:58:33.544226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.911 [2024-11-06 13:58:33.676607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:52.911 [2024-11-06 13:58:33.676817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:52.911 [2024-11-06 13:58:33.677596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
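The EAL banner, reactor and thread notices above confirm that the target came up in interrupt mode on four cores inside the test namespace. Stripped of the wrapper functions, the launch amounts to the sketch below (binary path and flags copied from this run; -i is the shared-memory instance ID, -e the tracepoint group mask, -m the reactor core mask):

# Sketch of the target launch as recorded above.
sudo ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
# The harness's waitforlisten then polls the RPC socket (/var/tmp/spdk.sock,
# as named in the log) until the app answers, before any rpc_cmd is issued.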
00:28:52.911 [2024-11-06 13:58:33.677932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:52.911 [2024-11-06 13:58:33.678778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 [2024-11-06 13:58:34.277272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 Malloc0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
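rpc_cmd in the harness forwards to SPDK's JSON-RPC client, so the provisioning sequence logged below (transport, malloc bdev, subsystem, namespace, listener) can be reproduced stand-alone, roughly as in this sketch using scripts/rpc.py with this run's names and sizes; the transport options are copied verbatim from the log:

# Sketch: the nmic setup steps as direct rpc.py calls.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as logged
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Test case1 below then creates a second subsystem (cnode2) and tries to attach the same Malloc0 to it; because the first add_ns claimed the bdev with an exclusive-write claim, that second call is expected to fail with exactly the Code=-32602 error shown.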
00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 [2024-11-06 13:58:34.385523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 test case1: single bdev can't be used in multiple subsystems 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.481 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.481 [2024-11-06 13:58:34.412913] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:53.481 [2024-11-06 13:58:34.412942] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:53.481 [2024-11-06 13:58:34.412952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.741 2024/11/06 13:58:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:53.741 request: 00:28:53.741 { 00:28:53.741 "method": "nvmf_subsystem_add_ns", 00:28:53.741 "params": { 00:28:53.741 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.741 "namespace": { 00:28:53.741 "bdev_name": "Malloc0", 00:28:53.741 "no_auto_visible": false 00:28:53.741 } 00:28:53.741 } 00:28:53.741 } 00:28:53.741 Got JSON-RPC error response 00:28:53.741 GoRPCClient: error on JSON-RPC call 00:28:53.741 13:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:53.741 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:28:53.741 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:53.742 Adding namespace failed - expected result. 00:28:53.742 test case2: host connect to nvmf target in multiple paths 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:53.742 [2024-11-06 13:58:34.425013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:28:53.742 13:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:56.279 13:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:28:56.279 13:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:56.279 [global] 00:28:56.279 thread=1 00:28:56.279 invalidate=1 00:28:56.279 rw=write 00:28:56.279 time_based=1 00:28:56.279 runtime=1 00:28:56.279 ioengine=libaio 00:28:56.279 direct=1 00:28:56.279 bs=4096 00:28:56.279 iodepth=1 00:28:56.279 norandommap=0 00:28:56.279 numjobs=1 00:28:56.279 00:28:56.279 verify_dump=1 00:28:56.279 verify_backlog=512 00:28:56.279 verify_state_save=0 00:28:56.279 do_verify=1 00:28:56.279 verify=crc32c-intel 00:28:56.279 [job0] 00:28:56.279 filename=/dev/nvme0n1 00:28:56.279 Could not set queue depth (nvme0n1) 00:28:56.279 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:56.279 fio-3.35 00:28:56.279 Starting 1 thread 00:28:57.250 00:28:57.250 job0: (groupid=0, jobs=1): err= 0: pid=106470: Wed Nov 6 13:58:37 2024 00:28:57.250 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1001msec) 00:28:57.250 slat (nsec): min=8509, max=26764, avg=9263.34, stdev=1073.07 00:28:57.250 clat (usec): min=110, max=1391, avg=130.50, stdev=32.51 00:28:57.250 lat (usec): min=119, max=1400, avg=139.76, stdev=32.78 00:28:57.250 clat percentiles (usec): 00:28:57.250 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 124], 00:28:57.250 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 130], 00:28:57.250 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 141], 00:28:57.250 | 99.00th=[ 149], 99.50th=[ 235], 99.90th=[ 603], 99.95th=[ 619], 00:28:57.250 | 99.99th=[ 1385] 00:28:57.250 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:28:57.250 slat (usec): min=11, max=140, avg=14.22, stdev= 5.72 00:28:57.250 clat (usec): min=74, max=495, avg=90.13, stdev= 9.51 00:28:57.250 lat (usec): min=87, max=508, avg=104.35, stdev=12.73 00:28:57.250 clat percentiles (usec): 00:28:57.250 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:28:57.250 | 30.00th=[ 87], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:28:57.250 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 98], 95.00th=[ 102], 00:28:57.250 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 174], 99.95th=[ 192], 00:28:57.250 | 99.99th=[ 494] 00:28:57.250 bw ( KiB/s): min=16351, max=16351, per=99.90%, avg=16351.00, stdev= 0.00, samples=1 00:28:57.250 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:28:57.250 lat (usec) : 100=47.06%, 250=52.73%, 500=0.15%, 750=0.04% 00:28:57.250 lat (msec) : 2=0.02% 00:28:57.250 cpu : usr=2.20%, sys=6.90%, ctx=8140, majf=0, minf=5 00:28:57.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:57.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.250 issued rwts: total=4044,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:57.250 00:28:57.250 Run status group 0 (all jobs): 00:28:57.250 READ: bw=15.8MiB/s (16.5MB/s), 15.8MiB/s-15.8MiB/s (16.5MB/s-16.5MB/s), io=15.8MiB (16.6MB), run=1001-1001msec 00:28:57.250 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:28:57.250 00:28:57.250 Disk stats (read/write): 00:28:57.250 nvme0n1: ios=3634/3723, merge=0/0, ticks=491/361, in_queue=852, util=91.37% 00:28:57.250 13:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:57.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.250 rmmod nvme_tcp 00:28:57.250 rmmod nvme_fabrics 00:28:57.250 rmmod nvme_keyring 00:28:57.250 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 106367 ']' 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 106367 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 106367 ']' 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 106367 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106367 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:57.509 13:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106367' 00:28:57.509 killing process with pid 106367 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 106367 00:28:57.509 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 106367 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:57.769 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:58.028 13:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:28:58.028 00:28:58.028 real 0m6.518s 00:28:58.028 user 0m15.125s 00:28:58.028 sys 0m2.780s 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:58.028 ************************************ 00:28:58.028 END TEST nvmf_nmic 00:28:58.028 ************************************ 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:58.028 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:58.287 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:58.287 ************************************ 00:28:58.287 START TEST nvmf_fio_target 00:28:58.287 ************************************ 00:28:58.287 13:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:58.287 * Looking for test storage... 
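The nvmf_nmic teardown that closed out above (nvme disconnect through _remove_spdk_ns) reduces to roughly the following; a sketch of what nvmftestfini in nvmf/common.sh does, reconstructed from the trace, with the pid taken from this run:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drops both paths (ports 4420 and 4421)
  modprobe -v -r nvme-tcp nvme-fabrics                  # nvme_keyring unloads as a dependency
  kill 106367                                           # killprocess $nvmfpid
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK-tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                      # assumed body of _remove_spdk_ns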
00:28:58.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:58.287 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:58.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.288 --rc genhtml_branch_coverage=1 00:28:58.288 --rc genhtml_function_coverage=1 00:28:58.288 --rc genhtml_legend=1 00:28:58.288 --rc geninfo_all_blocks=1 00:28:58.288 --rc geninfo_unexecuted_blocks=1 00:28:58.288 00:28:58.288 ' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:58.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.288 --rc genhtml_branch_coverage=1 00:28:58.288 --rc genhtml_function_coverage=1 00:28:58.288 --rc genhtml_legend=1 00:28:58.288 --rc geninfo_all_blocks=1 00:28:58.288 --rc geninfo_unexecuted_blocks=1 00:28:58.288 00:28:58.288 ' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:58.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.288 --rc genhtml_branch_coverage=1 00:28:58.288 --rc genhtml_function_coverage=1 00:28:58.288 --rc genhtml_legend=1 00:28:58.288 --rc geninfo_all_blocks=1 00:28:58.288 --rc geninfo_unexecuted_blocks=1 00:28:58.288 00:28:58.288 ' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:58.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.288 --rc genhtml_branch_coverage=1 00:28:58.288 --rc genhtml_function_coverage=1 00:28:58.288 --rc genhtml_legend=1 00:28:58.288 --rc geninfo_all_blocks=1 00:28:58.288 --rc geninfo_unexecuted_blocks=1 00:28:58.288 
00:28:58.288 ' 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:58.288 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:58.548 13:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:58.548 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:58.549 Cannot find device "nvmf_init_br" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:58.549 Cannot find device "nvmf_init_br2" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:58.549 Cannot find device "nvmf_tgt_br" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:58.549 Cannot find device "nvmf_tgt_br2" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:58.549 Cannot find device "nvmf_init_br" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:58.549 Cannot find device "nvmf_init_br2" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:58.549 Cannot find device "nvmf_tgt_br" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:58.549 Cannot find device "nvmf_tgt_br2" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:58.549 Cannot find device "nvmf_br" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:58.549 Cannot find device "nvmf_init_if" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:58.549 Cannot find device "nvmf_init_if2" 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:28:58.549 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:58.809 13:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:58.809 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:59.070 13:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:59.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:59.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.150 ms 00:28:59.070 00:28:59.070 --- 10.0.0.3 ping statistics --- 00:28:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.070 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:59.070 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:59.070 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:28:59.070 00:28:59.070 --- 10.0.0.4 ping statistics --- 00:28:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.070 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:59.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:28:59.070 00:28:59.070 --- 10.0.0.1 ping statistics --- 00:28:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.070 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:59.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:59.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:28:59.070 00:28:59.070 --- 10.0.0.2 ping statistics --- 00:28:59.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.070 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=106704 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 106704 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 106704 ']' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:59.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:59.070 13:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.070 [2024-11-06 13:58:39.966830] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
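The addressing the four pings just confirmed comes from the veth/bridge fixture built a few entries earlier (nvmf_veth_init). Condensed to one of the two interface pairs, it is plain iproute2; the second pair (nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2 and 10.0.0.4) is set up identically:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                          # host-to-namespace reachability check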
00:28:59.070 [2024-11-06 13:58:39.967923] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:28:59.070 [2024-11-06 13:58:39.967989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.330 [2024-11-06 13:58:40.119983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.330 [2024-11-06 13:58:40.188540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.330 [2024-11-06 13:58:40.188600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.330 [2024-11-06 13:58:40.188610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.330 [2024-11-06 13:58:40.188618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.330 [2024-11-06 13:58:40.188626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.330 [2024-11-06 13:58:40.190046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.330 [2024-11-06 13:58:40.190138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.330 [2024-11-06 13:58:40.190222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.330 [2024-11-06 13:58:40.190228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.588 [2024-11-06 13:58:40.324246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:59.588 [2024-11-06 13:58:40.324559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.588 [2024-11-06 13:58:40.324654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:59.588 [2024-11-06 13:58:40.324794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:59.588 [2024-11-06 13:58:40.325341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
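With the namespace in place, starting the target in interrupt mode is a single command; the -m 0xF core mask is what yields the four reactors and the four poll-group threads the notices above show switching to intr mode. A sketch of nvmfappstart plus its waitforlisten step (the polling loop is an assumed equivalent of the helper, not its literal body):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!                                  # 106704 in this run
  # poll until the app answers RPCs on the default /var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done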
00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.154 13:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.412 [2024-11-06 13:58:41.123359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.412 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:00.671 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:00.671 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:00.929 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:00.929 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:01.187 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:01.187 13:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:01.446 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:01.446 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:01.705 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:01.964 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:01.964 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:02.222 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:02.222 13:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:02.482 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:02.482 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:02.741 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:03.000 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:03.000 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.000 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:03.000 13:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:03.569 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:03.569 [2024-11-06 13:58:44.395346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:03.569 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:03.828 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:29:04.087 13:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:29:06.037 13:58:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:29:06.037 13:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:06.297 [global] 00:29:06.297 thread=1 00:29:06.297 invalidate=1 00:29:06.297 rw=write 00:29:06.297 time_based=1 00:29:06.297 runtime=1 00:29:06.297 ioengine=libaio 00:29:06.297 direct=1 00:29:06.297 bs=4096 00:29:06.297 iodepth=1 00:29:06.297 norandommap=0 00:29:06.297 numjobs=1 00:29:06.297 00:29:06.297 verify_dump=1 00:29:06.297 verify_backlog=512 00:29:06.297 verify_state_save=0 00:29:06.297 do_verify=1 00:29:06.297 verify=crc32c-intel 00:29:06.297 [job0] 00:29:06.297 filename=/dev/nvme0n1 00:29:06.297 [job1] 00:29:06.297 filename=/dev/nvme0n2 00:29:06.297 [job2] 00:29:06.297 filename=/dev/nvme0n3 00:29:06.297 [job3] 00:29:06.297 filename=/dev/nvme0n4 00:29:06.297 Could not set queue depth (nvme0n1) 00:29:06.297 Could not set queue depth (nvme0n2) 00:29:06.297 Could not set queue depth (nvme0n3) 00:29:06.297 Could not set queue depth (nvme0n4) 00:29:06.297 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:06.297 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:06.297 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:06.297 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:06.297 fio-3.35 00:29:06.297 Starting 4 threads 00:29:07.675 00:29:07.675 job0: (groupid=0, jobs=1): err= 0: pid=106992: Wed Nov 6 13:58:48 2024 00:29:07.675 read: IOPS=1031, BW=4128KiB/s (4227kB/s)(4132KiB/1001msec) 00:29:07.675 slat (nsec): min=10464, max=70820, avg=25324.52, stdev=8500.06 00:29:07.675 clat (usec): min=206, max=725, avg=432.29, stdev=86.86 00:29:07.675 lat (usec): min=239, max=745, avg=457.61, stdev=83.44 00:29:07.675 clat percentiles (usec): 00:29:07.675 | 1.00th=[ 245], 5.00th=[ 293], 10.00th=[ 334], 20.00th=[ 367], 00:29:07.675 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 445], 00:29:07.675 | 70.00th=[ 465], 80.00th=[ 498], 90.00th=[ 545], 95.00th=[ 594], 00:29:07.675 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 717], 99.95th=[ 725], 00:29:07.675 | 99.99th=[ 725] 00:29:07.675 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:07.675 slat (usec): min=14, max=114, avg=37.95, stdev=12.69 00:29:07.675 clat (usec): min=145, max=6178, avg=301.39, stdev=160.45 00:29:07.675 lat (usec): min=196, max=6224, avg=339.34, stdev=160.72 00:29:07.675 clat percentiles (usec): 00:29:07.675 | 1.00th=[ 169], 5.00th=[ 202], 10.00th=[ 229], 20.00th=[ 251], 00:29:07.675 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 310], 00:29:07.675 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 371], 95.00th=[ 396], 00:29:07.675 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 478], 99.95th=[ 6194], 00:29:07.675 | 99.99th=[ 6194] 00:29:07.675 bw ( KiB/s): min= 6624, max= 6624, per=22.61%, avg=6624.00, stdev= 0.00, samples=1 00:29:07.675 iops : min= 1656, max= 1656, avg=1656.00, stdev= 0.00, samples=1 00:29:07.675 lat (usec) : 250=11.87%, 500=80.73%, 750=7.36% 00:29:07.675 lat (msec) : 10=0.04% 
00:29:07.675 cpu : usr=2.00%, sys=6.10%, ctx=2570, majf=0, minf=7 00:29:07.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.675 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:07.675 job1: (groupid=0, jobs=1): err= 0: pid=106993: Wed Nov 6 13:58:48 2024 00:29:07.675 read: IOPS=1749, BW=6997KiB/s (7165kB/s)(7004KiB/1001msec) 00:29:07.675 slat (nsec): min=8289, max=40440, avg=12787.96, stdev=5522.81 00:29:07.675 clat (usec): min=127, max=1362, avg=298.29, stdev=145.90 00:29:07.675 lat (usec): min=135, max=1375, avg=311.07, stdev=149.99 00:29:07.675 clat percentiles (usec): 00:29:07.675 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 176], 00:29:07.675 | 30.00th=[ 192], 40.00th=[ 210], 50.00th=[ 229], 60.00th=[ 262], 00:29:07.675 | 70.00th=[ 408], 80.00th=[ 465], 90.00th=[ 510], 95.00th=[ 545], 00:29:07.675 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 1287], 99.95th=[ 1369], 00:29:07.675 | 99.99th=[ 1369] 00:29:07.675 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:07.675 slat (usec): min=12, max=112, avg=18.92, stdev= 8.36 00:29:07.675 clat (usec): min=85, max=3913, avg=201.10, stdev=152.54 00:29:07.675 lat (usec): min=98, max=3927, avg=220.02, stdev=154.97 00:29:07.675 clat percentiles (usec): 00:29:07.675 | 1.00th=[ 101], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 130], 00:29:07.675 | 30.00th=[ 145], 40.00th=[ 157], 50.00th=[ 172], 60.00th=[ 186], 00:29:07.675 | 70.00th=[ 217], 80.00th=[ 265], 90.00th=[ 326], 95.00th=[ 367], 00:29:07.675 | 99.00th=[ 433], 99.50th=[ 469], 99.90th=[ 2933], 99.95th=[ 3752], 00:29:07.675 | 99.99th=[ 3916] 00:29:07.675 bw ( KiB/s): min=12288, max=12288, per=41.93%, avg=12288.00, stdev= 0.00, samples=1 00:29:07.675 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:29:07.675 lat (usec) : 100=0.42%, 250=68.04%, 500=25.98%, 750=5.40%, 1000=0.03% 00:29:07.676 lat (msec) : 2=0.05%, 4=0.08% 00:29:07.676 cpu : usr=1.00%, sys=5.00%, ctx=3802, majf=0, minf=13 00:29:07.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 issued rwts: total=1751,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:07.676 job2: (groupid=0, jobs=1): err= 0: pid=106994: Wed Nov 6 13:58:48 2024 00:29:07.676 read: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec) 00:29:07.676 slat (usec): min=10, max=118, avg=24.93, stdev=13.17 00:29:07.676 clat (usec): min=194, max=2620, avg=334.32, stdev=86.65 00:29:07.676 lat (usec): min=249, max=2633, avg=359.25, stdev=91.59 00:29:07.676 clat percentiles (usec): 00:29:07.676 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:29:07.676 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 334], 00:29:07.676 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 441], 00:29:07.676 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 1037], 99.95th=[ 2606], 00:29:07.676 | 99.99th=[ 2606] 00:29:07.676 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:07.676 slat (usec): min=29, max=132, 
avg=50.56, stdev=14.82 00:29:07.676 clat (usec): min=164, max=721, avg=271.75, stdev=52.02 00:29:07.676 lat (usec): min=200, max=773, avg=322.30, stdev=55.63 00:29:07.676 clat percentiles (usec): 00:29:07.676 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:29:07.676 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 281], 00:29:07.676 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 355], 00:29:07.676 | 99.00th=[ 388], 99.50th=[ 429], 99.90th=[ 619], 99.95th=[ 725], 00:29:07.676 | 99.99th=[ 725] 00:29:07.676 bw ( KiB/s): min= 6360, max= 6360, per=21.70%, avg=6360.00, stdev= 0.00, samples=1 00:29:07.676 iops : min= 1590, max= 1590, avg=1590.00, stdev= 0.00, samples=1 00:29:07.676 lat (usec) : 250=21.48%, 500=78.07%, 750=0.38% 00:29:07.676 lat (msec) : 2=0.03%, 4=0.03% 00:29:07.676 cpu : usr=1.50%, sys=9.00%, ctx=2920, majf=0, minf=18 00:29:07.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 issued rwts: total=1383,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:07.676 job3: (groupid=0, jobs=1): err= 0: pid=106996: Wed Nov 6 13:58:48 2024 00:29:07.676 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:07.676 slat (nsec): min=8538, max=38150, avg=11393.88, stdev=3283.36 00:29:07.676 clat (usec): min=154, max=733, avg=250.90, stdev=43.41 00:29:07.676 lat (usec): min=163, max=743, avg=262.30, stdev=44.92 00:29:07.676 clat percentiles (usec): 00:29:07.676 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 210], 00:29:07.676 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 265], 00:29:07.676 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:29:07.676 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 644], 99.95th=[ 717], 00:29:07.676 | 99.99th=[ 734] 00:29:07.676 write: IOPS=2210, BW=8843KiB/s (9055kB/s)(8852KiB/1001msec); 0 zone resets 00:29:07.676 slat (usec): min=12, max=101, avg=18.48, stdev= 7.95 00:29:07.676 clat (usec): min=107, max=310, avg=188.48, stdev=34.83 00:29:07.676 lat (usec): min=121, max=356, avg=206.95, stdev=37.96 00:29:07.676 clat percentiles (usec): 00:29:07.676 | 1.00th=[ 121], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 153], 00:29:07.676 | 30.00th=[ 167], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 200], 00:29:07.676 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 243], 00:29:07.676 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 306], 00:29:07.676 | 99.99th=[ 310] 00:29:07.676 bw ( KiB/s): min=10600, max=10600, per=36.17%, avg=10600.00, stdev= 0.00, samples=1 00:29:07.676 iops : min= 2650, max= 2650, avg=2650.00, stdev= 0.00, samples=1 00:29:07.676 lat (usec) : 250=71.27%, 500=28.66%, 750=0.07% 00:29:07.676 cpu : usr=1.30%, sys=4.80%, ctx=4261, majf=0, minf=11 00:29:07.676 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.676 issued rwts: total=2048,2213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.676 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:07.676 00:29:07.676 Run status group 0 (all jobs): 00:29:07.676 READ: bw=24.3MiB/s (25.4MB/s), 4128KiB/s-8184KiB/s 
(4227kB/s-8380kB/s), io=24.3MiB (25.5MB), run=1001-1001msec 00:29:07.676 WRITE: bw=28.6MiB/s (30.0MB/s), 6138KiB/s-8843KiB/s (6285kB/s-9055kB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:29:07.676 00:29:07.676 Disk stats (read/write): 00:29:07.676 nvme0n1: ios=1074/1129, merge=0/0, ticks=495/341, in_queue=836, util=88.38% 00:29:07.676 nvme0n2: ios=1585/1947, merge=0/0, ticks=442/378, in_queue=820, util=87.88% 00:29:07.676 nvme0n3: ios=1045/1470, merge=0/0, ticks=380/441, in_queue=821, util=89.08% 00:29:07.676 nvme0n4: ios=1694/2048, merge=0/0, ticks=412/394, in_queue=806, util=89.69% 00:29:07.676 13:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:07.676 [global] 00:29:07.676 thread=1 00:29:07.676 invalidate=1 00:29:07.676 rw=randwrite 00:29:07.676 time_based=1 00:29:07.676 runtime=1 00:29:07.676 ioengine=libaio 00:29:07.676 direct=1 00:29:07.676 bs=4096 00:29:07.676 iodepth=1 00:29:07.676 norandommap=0 00:29:07.676 numjobs=1 00:29:07.676 00:29:07.676 verify_dump=1 00:29:07.676 verify_backlog=512 00:29:07.676 verify_state_save=0 00:29:07.676 do_verify=1 00:29:07.676 verify=crc32c-intel 00:29:07.676 [job0] 00:29:07.676 filename=/dev/nvme0n1 00:29:07.676 [job1] 00:29:07.676 filename=/dev/nvme0n2 00:29:07.676 [job2] 00:29:07.676 filename=/dev/nvme0n3 00:29:07.676 [job3] 00:29:07.676 filename=/dev/nvme0n4 00:29:07.676 Could not set queue depth (nvme0n1) 00:29:07.676 Could not set queue depth (nvme0n2) 00:29:07.676 Could not set queue depth (nvme0n3) 00:29:07.676 Could not set queue depth (nvme0n4) 00:29:07.934 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:07.934 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:07.934 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:07.934 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:07.934 fio-3.35 00:29:07.934 Starting 4 threads 00:29:09.311 00:29:09.311 job0: (groupid=0, jobs=1): err= 0: pid=107054: Wed Nov 6 13:58:49 2024 00:29:09.311 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:09.311 slat (nsec): min=8394, max=22159, avg=9086.81, stdev=1062.60 00:29:09.311 clat (usec): min=178, max=628, avg=252.72, stdev=22.23 00:29:09.311 lat (usec): min=187, max=637, avg=261.81, stdev=22.26 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:29:09.311 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:29:09.311 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:29:09.311 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 351], 00:29:09.311 | 99.99th=[ 627] 00:29:09.311 write: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec); 0 zone resets 00:29:09.311 slat (usec): min=12, max=108, avg=14.44, stdev= 6.15 00:29:09.311 clat (usec): min=117, max=444, avg=187.20, stdev=21.26 00:29:09.311 lat (usec): min=130, max=457, avg=201.65, stdev=23.24 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 135], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 172], 00:29:09.311 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:29:09.311 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:29:09.311 | 99.00th=[ 239], 
99.50th=[ 245], 99.90th=[ 281], 99.95th=[ 379], 00:29:09.311 | 99.99th=[ 445] 00:29:09.311 bw ( KiB/s): min= 9088, max= 9088, per=35.04%, avg=9088.00, stdev= 0.00, samples=1 00:29:09.311 iops : min= 2272, max= 2272, avg=2272.00, stdev= 0.00, samples=1 00:29:09.311 lat (usec) : 250=74.82%, 500=25.16%, 750=0.02% 00:29:09.311 cpu : usr=0.90%, sys=4.00%, ctx=4329, majf=0, minf=9 00:29:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 issued rwts: total=2048,2281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.311 job1: (groupid=0, jobs=1): err= 0: pid=107055: Wed Nov 6 13:58:49 2024 00:29:09.311 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:29:09.311 slat (nsec): min=12046, max=43759, avg=17769.83, stdev=3428.48 00:29:09.311 clat (usec): min=249, max=751, avg=492.81, stdev=93.20 00:29:09.311 lat (usec): min=267, max=770, avg=510.58, stdev=93.70 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 285], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 429], 00:29:09.311 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 510], 00:29:09.311 | 70.00th=[ 523], 80.00th=[ 545], 90.00th=[ 635], 95.00th=[ 676], 00:29:09.311 | 99.00th=[ 717], 99.50th=[ 725], 99.90th=[ 742], 99.95th=[ 750], 00:29:09.311 | 99.99th=[ 750] 00:29:09.311 write: IOPS=1273, BW=5095KiB/s (5217kB/s)(5100KiB/1001msec); 0 zone resets 00:29:09.311 slat (usec): min=13, max=101, avg=25.99, stdev= 7.85 00:29:09.311 clat (usec): min=214, max=1495, avg=345.14, stdev=54.50 00:29:09.311 lat (usec): min=247, max=1514, avg=371.12, stdev=55.15 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 245], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 310], 00:29:09.311 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:29:09.311 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 412], 00:29:09.311 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 510], 99.95th=[ 1500], 00:29:09.311 | 99.99th=[ 1500] 00:29:09.311 bw ( KiB/s): min= 5040, max= 5040, per=19.43%, avg=5040.00, stdev= 0.00, samples=1 00:29:09.311 iops : min= 1260, max= 1260, avg=1260.00, stdev= 0.00, samples=1 00:29:09.311 lat (usec) : 250=0.91%, 500=77.64%, 750=21.36%, 1000=0.04% 00:29:09.311 lat (msec) : 2=0.04% 00:29:09.311 cpu : usr=0.50%, sys=4.60%, ctx=2299, majf=0, minf=18 00:29:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 issued rwts: total=1024,1275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.311 job2: (groupid=0, jobs=1): err= 0: pid=107056: Wed Nov 6 13:58:49 2024 00:29:09.311 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:29:09.311 slat (nsec): min=13095, max=97313, avg=24440.59, stdev=5797.91 00:29:09.311 clat (usec): min=231, max=3341, avg=322.98, stdev=86.59 00:29:09.311 lat (usec): min=252, max=3362, avg=347.42, stdev=87.19 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:29:09.311 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 326], 00:29:09.311 | 70.00th=[ 338], 
80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 371], 00:29:09.311 | 99.00th=[ 404], 99.50th=[ 502], 99.90th=[ 906], 99.95th=[ 3326], 00:29:09.311 | 99.99th=[ 3326] 00:29:09.311 write: IOPS=1658, BW=6633KiB/s (6793kB/s)(6640KiB/1001msec); 0 zone resets 00:29:09.311 slat (usec): min=24, max=102, avg=39.76, stdev= 8.66 00:29:09.311 clat (usec): min=169, max=435, avg=236.33, stdev=31.91 00:29:09.311 lat (usec): min=196, max=499, avg=276.10, stdev=34.58 00:29:09.311 clat percentiles (usec): 00:29:09.311 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:29:09.311 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:29:09.311 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:29:09.311 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 371], 99.95th=[ 437], 00:29:09.311 | 99.99th=[ 437] 00:29:09.311 bw ( KiB/s): min= 8192, max= 8192, per=31.58%, avg=8192.00, stdev= 0.00, samples=1 00:29:09.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:09.311 lat (usec) : 250=36.95%, 500=62.80%, 750=0.16%, 1000=0.06% 00:29:09.311 lat (msec) : 4=0.03% 00:29:09.311 cpu : usr=2.20%, sys=7.60%, ctx=3199, majf=0, minf=14 00:29:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.311 issued rwts: total=1536,1660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.312 job3: (groupid=0, jobs=1): err= 0: pid=107057: Wed Nov 6 13:58:49 2024 00:29:09.312 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:29:09.312 slat (nsec): min=11394, max=36206, avg=17640.26, stdev=3499.24 00:29:09.312 clat (usec): min=269, max=717, avg=492.96, stdev=83.51 00:29:09.312 lat (usec): min=291, max=736, avg=510.60, stdev=84.16 00:29:09.312 clat percentiles (usec): 00:29:09.312 | 1.00th=[ 297], 5.00th=[ 351], 10.00th=[ 379], 20.00th=[ 429], 00:29:09.312 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 515], 00:29:09.312 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 652], 00:29:09.312 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 717], 99.95th=[ 717], 00:29:09.312 | 99.99th=[ 717] 00:29:09.312 write: IOPS=1273, BW=5095KiB/s (5217kB/s)(5100KiB/1001msec); 0 zone resets 00:29:09.312 slat (usec): min=13, max=130, avg=27.15, stdev= 8.37 00:29:09.312 clat (usec): min=177, max=1590, avg=343.95, stdev=56.75 00:29:09.312 lat (usec): min=207, max=1615, avg=371.11, stdev=57.15 00:29:09.312 clat percentiles (usec): 00:29:09.312 | 1.00th=[ 243], 5.00th=[ 265], 10.00th=[ 285], 20.00th=[ 306], 00:29:09.312 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:29:09.312 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 416], 00:29:09.312 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 519], 99.95th=[ 1598], 00:29:09.312 | 99.99th=[ 1598] 00:29:09.312 bw ( KiB/s): min= 5032, max= 5032, per=19.40%, avg=5032.00, stdev= 0.00, samples=1 00:29:09.312 iops : min= 1258, max= 1258, avg=1258.00, stdev= 0.00, samples=1 00:29:09.312 lat (usec) : 250=1.09%, 500=76.86%, 750=22.01% 00:29:09.312 lat (msec) : 2=0.04% 00:29:09.312 cpu : usr=1.20%, sys=4.00%, ctx=2301, majf=0, minf=7 00:29:09.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.312 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.312 issued rwts: total=1024,1275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.312 00:29:09.312 Run status group 0 (all jobs): 00:29:09.312 READ: bw=22.0MiB/s (23.0MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1001msec 00:29:09.312 WRITE: bw=25.3MiB/s (26.6MB/s), 5095KiB/s-9115KiB/s (5217kB/s-9334kB/s), io=25.4MiB (26.6MB), run=1001-1001msec 00:29:09.312 00:29:09.312 Disk stats (read/write): 00:29:09.312 nvme0n1: ios=1735/2048, merge=0/0, ticks=456/396, in_queue=852, util=87.76% 00:29:09.312 nvme0n2: ios=975/1024, merge=0/0, ticks=465/350, in_queue=815, util=87.58% 00:29:09.312 nvme0n3: ios=1224/1536, merge=0/0, ticks=439/382, in_queue=821, util=89.39% 00:29:09.312 nvme0n4: ios=925/1024, merge=0/0, ticks=446/345, in_queue=791, util=89.54% 00:29:09.312 13:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:09.312 [global] 00:29:09.312 thread=1 00:29:09.312 invalidate=1 00:29:09.312 rw=write 00:29:09.312 time_based=1 00:29:09.312 runtime=1 00:29:09.312 ioengine=libaio 00:29:09.312 direct=1 00:29:09.312 bs=4096 00:29:09.312 iodepth=128 00:29:09.312 norandommap=0 00:29:09.312 numjobs=1 00:29:09.312 00:29:09.312 verify_dump=1 00:29:09.312 verify_backlog=512 00:29:09.312 verify_state_save=0 00:29:09.312 do_verify=1 00:29:09.312 verify=crc32c-intel 00:29:09.312 [job0] 00:29:09.312 filename=/dev/nvme0n1 00:29:09.312 [job1] 00:29:09.312 filename=/dev/nvme0n2 00:29:09.312 [job2] 00:29:09.312 filename=/dev/nvme0n3 00:29:09.312 [job3] 00:29:09.312 filename=/dev/nvme0n4 00:29:09.312 Could not set queue depth (nvme0n1) 00:29:09.312 Could not set queue depth (nvme0n2) 00:29:09.312 Could not set queue depth (nvme0n3) 00:29:09.312 Could not set queue depth (nvme0n4) 00:29:09.312 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:09.312 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:09.312 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:09.312 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:09.312 fio-3.35 00:29:09.312 Starting 4 threads 00:29:10.692 00:29:10.692 job0: (groupid=0, jobs=1): err= 0: pid=107112: Wed Nov 6 13:58:51 2024 00:29:10.692 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:29:10.692 slat (usec): min=15, max=5267, avg=148.45, stdev=680.53 00:29:10.692 clat (usec): min=13888, max=24615, avg=19228.76, stdev=1538.65 00:29:10.692 lat (usec): min=13936, max=24636, avg=19377.21, stdev=1432.32 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[14877], 5.00th=[16581], 10.00th=[17433], 20.00th=[17957], 00:29:10.692 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:29:10.692 | 70.00th=[20055], 80.00th=[20317], 90.00th=[21103], 95.00th=[21365], 00:29:10.692 | 99.00th=[22414], 99.50th=[22938], 99.90th=[24511], 99.95th=[24511], 00:29:10.692 | 99.99th=[24511] 00:29:10.692 write: IOPS=3502, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1005msec); 0 zone resets 00:29:10.692 slat (usec): min=23, max=6707, avg=143.07, stdev=533.00 00:29:10.692 clat (usec): min=3384, max=26864, avg=19144.64, stdev=2988.69 00:29:10.692 lat (usec): min=4325, 
max=26896, avg=19287.71, stdev=2985.33 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[ 9503], 5.00th=[14877], 10.00th=[16057], 20.00th=[16909], 00:29:10.692 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19006], 60.00th=[19530], 00:29:10.692 | 70.00th=[20317], 80.00th=[21103], 90.00th=[22676], 95.00th=[25297], 00:29:10.692 | 99.00th=[25560], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:29:10.692 | 99.99th=[26870] 00:29:10.692 bw ( KiB/s): min=12953, max=14208, per=30.63%, avg=13580.50, stdev=887.42, samples=2 00:29:10.692 iops : min= 3238, max= 3552, avg=3395.00, stdev=222.03, samples=2 00:29:10.692 lat (msec) : 4=0.02%, 10=0.61%, 20=65.91%, 50=33.46% 00:29:10.692 cpu : usr=4.08%, sys=14.34%, ctx=471, majf=0, minf=17 00:29:10.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:29:10.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.692 issued rwts: total=3072,3520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.692 job1: (groupid=0, jobs=1): err= 0: pid=107113: Wed Nov 6 13:58:51 2024 00:29:10.692 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:29:10.692 slat (usec): min=7, max=6677, avg=150.38, stdev=707.46 00:29:10.692 clat (usec): min=13689, max=29007, avg=19628.86, stdev=2140.73 00:29:10.692 lat (usec): min=13722, max=30152, avg=19779.24, stdev=2191.28 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[14615], 5.00th=[16188], 10.00th=[16909], 20.00th=[17695], 00:29:10.692 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[20317], 00:29:10.692 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22676], 95.00th=[23462], 00:29:10.692 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26084], 99.95th=[28967], 00:29:10.692 | 99.99th=[28967] 00:29:10.692 write: IOPS=3511, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1003msec); 0 zone resets 00:29:10.692 slat (usec): min=15, max=8267, avg=141.29, stdev=654.09 00:29:10.692 clat (usec): min=546, max=26075, avg=18814.38, stdev=3103.57 00:29:10.692 lat (usec): min=5166, max=27729, avg=18955.67, stdev=3085.79 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[ 6521], 5.00th=[13698], 10.00th=[15139], 20.00th=[17171], 00:29:10.692 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:29:10.692 | 70.00th=[19792], 80.00th=[21103], 90.00th=[22152], 95.00th=[24249], 00:29:10.692 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:29:10.692 | 99.99th=[26084] 00:29:10.692 bw ( KiB/s): min=12560, max=14592, per=30.62%, avg=13576.00, stdev=1436.84, samples=2 00:29:10.692 iops : min= 3140, max= 3648, avg=3394.00, stdev=359.21, samples=2 00:29:10.692 lat (usec) : 750=0.02% 00:29:10.692 lat (msec) : 10=0.74%, 20=62.27%, 50=36.97% 00:29:10.692 cpu : usr=4.89%, sys=13.77%, ctx=319, majf=0, minf=7 00:29:10.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:29:10.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.692 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.692 job2: (groupid=0, jobs=1): err= 0: pid=107114: Wed Nov 6 13:58:51 2024 00:29:10.692 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8184KiB/1005msec) 00:29:10.692 slat 
(usec): min=8, max=9439, avg=262.27, stdev=1009.15 00:29:10.692 clat (usec): min=1863, max=42844, avg=32588.66, stdev=4830.15 00:29:10.692 lat (usec): min=6036, max=42866, avg=32850.93, stdev=4803.86 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[11207], 5.00th=[26084], 10.00th=[28705], 20.00th=[30016], 00:29:10.692 | 30.00th=[31327], 40.00th=[32375], 50.00th=[33162], 60.00th=[33817], 00:29:10.692 | 70.00th=[34866], 80.00th=[35914], 90.00th=[37487], 95.00th=[38536], 00:29:10.692 | 99.00th=[40109], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:10.692 | 99.99th=[42730] 00:29:10.692 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:29:10.692 slat (usec): min=10, max=8170, avg=214.78, stdev=814.16 00:29:10.692 clat (usec): min=15619, max=38651, avg=29307.90, stdev=5490.45 00:29:10.692 lat (usec): min=15667, max=38685, avg=29522.68, stdev=5482.49 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[16188], 5.00th=[17171], 10.00th=[20055], 20.00th=[25297], 00:29:10.692 | 30.00th=[27919], 40.00th=[29754], 50.00th=[31065], 60.00th=[31851], 00:29:10.692 | 70.00th=[32375], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:29:10.692 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:29:10.692 | 99.99th=[38536] 00:29:10.692 bw ( KiB/s): min= 8144, max= 8256, per=18.50%, avg=8200.00, stdev=79.20, samples=2 00:29:10.692 iops : min= 2036, max= 2064, avg=2050.00, stdev=19.80, samples=2 00:29:10.692 lat (msec) : 2=0.02%, 10=0.34%, 20=5.86%, 50=93.77% 00:29:10.692 cpu : usr=2.89%, sys=9.06%, ctx=606, majf=0, minf=14 00:29:10.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:10.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.692 issued rwts: total=2046,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.692 job3: (groupid=0, jobs=1): err= 0: pid=107115: Wed Nov 6 13:58:51 2024 00:29:10.692 read: IOPS=1943, BW=7773KiB/s (7959kB/s)(7796KiB/1003msec) 00:29:10.692 slat (usec): min=7, max=10233, avg=263.77, stdev=1001.31 00:29:10.692 clat (usec): min=2202, max=43381, avg=32876.01, stdev=5408.90 00:29:10.692 lat (usec): min=5900, max=43405, avg=33139.78, stdev=5392.74 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[ 6915], 5.00th=[26346], 10.00th=[28443], 20.00th=[30802], 00:29:10.692 | 30.00th=[31851], 40.00th=[32900], 50.00th=[33817], 60.00th=[34866], 00:29:10.692 | 70.00th=[35390], 80.00th=[36439], 90.00th=[37487], 95.00th=[39060], 00:29:10.692 | 99.00th=[40633], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:29:10.692 | 99.99th=[43254] 00:29:10.692 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:29:10.692 slat (usec): min=7, max=9025, avg=225.33, stdev=830.60 00:29:10.692 clat (usec): min=13407, max=40645, avg=30353.86, stdev=5002.06 00:29:10.692 lat (usec): min=15586, max=41439, avg=30579.19, stdev=4990.07 00:29:10.692 clat percentiles (usec): 00:29:10.692 | 1.00th=[17957], 5.00th=[19006], 10.00th=[22152], 20.00th=[27657], 00:29:10.692 | 30.00th=[29754], 40.00th=[31327], 50.00th=[31851], 60.00th=[32375], 00:29:10.692 | 70.00th=[33162], 80.00th=[33817], 90.00th=[35390], 95.00th=[35914], 00:29:10.692 | 99.00th=[38536], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:29:10.692 | 99.99th=[40633] 00:29:10.692 bw ( KiB/s): min= 8192, max= 8192, 
per=18.48%, avg=8192.00, stdev= 0.00, samples=2 00:29:10.692 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:29:10.692 lat (msec) : 4=0.03%, 10=0.78%, 20=5.25%, 50=93.95% 00:29:10.692 cpu : usr=2.69%, sys=8.48%, ctx=607, majf=0, minf=10 00:29:10.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:10.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.692 issued rwts: total=1949,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.692 00:29:10.692 Run status group 0 (all jobs): 00:29:10.692 READ: bw=39.4MiB/s (41.3MB/s), 7773KiB/s-12.0MiB/s (7959kB/s-12.5MB/s), io=39.6MiB (41.5MB), run=1003-1005msec 00:29:10.692 WRITE: bw=43.3MiB/s (45.4MB/s), 8151KiB/s-13.7MiB/s (8347kB/s-14.4MB/s), io=43.5MiB (45.6MB), run=1003-1005msec 00:29:10.692 00:29:10.692 Disk stats (read/write): 00:29:10.692 nvme0n1: ios=2609/2977, merge=0/0, ticks=11434/13052, in_queue=24486, util=86.75% 00:29:10.692 nvme0n2: ios=2587/2955, merge=0/0, ticks=15673/16122, in_queue=31795, util=87.98% 00:29:10.692 nvme0n3: ios=1536/1977, merge=0/0, ticks=12138/12536, in_queue=24674, util=88.46% 00:29:10.692 nvme0n4: ios=1536/1842, merge=0/0, ticks=12421/12248, in_queue=24669, util=88.40% 00:29:10.692 13:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:10.692 [global] 00:29:10.692 thread=1 00:29:10.692 invalidate=1 00:29:10.692 rw=randwrite 00:29:10.692 time_based=1 00:29:10.692 runtime=1 00:29:10.692 ioengine=libaio 00:29:10.692 direct=1 00:29:10.692 bs=4096 00:29:10.692 iodepth=128 00:29:10.692 norandommap=0 00:29:10.692 numjobs=1 00:29:10.692 00:29:10.692 verify_dump=1 00:29:10.692 verify_backlog=512 00:29:10.692 verify_state_save=0 00:29:10.692 do_verify=1 00:29:10.692 verify=crc32c-intel 00:29:10.692 [job0] 00:29:10.692 filename=/dev/nvme0n1 00:29:10.692 [job1] 00:29:10.692 filename=/dev/nvme0n2 00:29:10.692 [job2] 00:29:10.692 filename=/dev/nvme0n3 00:29:10.692 [job3] 00:29:10.692 filename=/dev/nvme0n4 00:29:10.692 Could not set queue depth (nvme0n1) 00:29:10.692 Could not set queue depth (nvme0n2) 00:29:10.693 Could not set queue depth (nvme0n3) 00:29:10.693 Could not set queue depth (nvme0n4) 00:29:10.693 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:10.693 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:10.693 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:10.693 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:10.693 fio-3.35 00:29:10.693 Starting 4 threads 00:29:12.073 00:29:12.073 job0: (groupid=0, jobs=1): err= 0: pid=107178: Wed Nov 6 13:58:52 2024 00:29:12.073 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:29:12.073 slat (usec): min=9, max=12301, avg=305.53, stdev=1511.45 00:29:12.073 clat (usec): min=9521, max=50780, avg=38168.97, stdev=5645.00 00:29:12.073 lat (usec): min=9540, max=51472, avg=38474.50, stdev=5507.21 00:29:12.073 clat percentiles (usec): 00:29:12.073 | 1.00th=[16057], 5.00th=[28705], 10.00th=[34341], 20.00th=[35914], 00:29:12.073 | 30.00th=[36963], 
40.00th=[37487], 50.00th=[37487], 60.00th=[38536], 00:29:12.073 | 70.00th=[39584], 80.00th=[41681], 90.00th=[45876], 95.00th=[47449], 00:29:12.073 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:29:12.073 | 99.99th=[50594] 00:29:12.073 write: IOPS=1564, BW=6257KiB/s (6407kB/s)(6276KiB/1003msec); 0 zone resets 00:29:12.073 slat (usec): min=24, max=10562, avg=326.57, stdev=1349.48 00:29:12.073 clat (usec): min=1607, max=61746, avg=42518.46, stdev=10839.72 00:29:12.073 lat (usec): min=2671, max=61778, avg=42845.03, stdev=10832.56 00:29:12.073 clat percentiles (usec): 00:29:12.073 | 1.00th=[ 8717], 5.00th=[32637], 10.00th=[34866], 20.00th=[35914], 00:29:12.073 | 30.00th=[36439], 40.00th=[36963], 50.00th=[38011], 60.00th=[40109], 00:29:12.073 | 70.00th=[47973], 80.00th=[56361], 90.00th=[58459], 95.00th=[60031], 00:29:12.073 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:29:12.073 | 99.99th=[61604] 00:29:12.073 bw ( KiB/s): min= 4096, max= 8208, per=13.97%, avg=6152.00, stdev=2907.62, samples=2 00:29:12.073 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:29:12.073 lat (msec) : 2=0.03%, 4=0.23%, 10=1.03%, 20=1.03%, 50=83.29% 00:29:12.073 lat (msec) : 100=14.40% 00:29:12.073 cpu : usr=2.40%, sys=6.19%, ctx=159, majf=0, minf=17 00:29:12.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:29:12.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.073 issued rwts: total=1536,1569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.073 job1: (groupid=0, jobs=1): err= 0: pid=107179: Wed Nov 6 13:58:52 2024 00:29:12.073 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:29:12.073 slat (usec): min=9, max=4788, avg=119.79, stdev=546.97 00:29:12.073 clat (usec): min=7368, max=21372, avg=15646.97, stdev=1866.09 00:29:12.074 lat (usec): min=7387, max=21394, avg=15766.76, stdev=1886.45 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[11731], 5.00th=[12649], 10.00th=[13304], 20.00th=[14091], 00:29:12.074 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15664], 60.00th=[16188], 00:29:12.074 | 70.00th=[16712], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:29:12.074 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[21365], 00:29:12.074 | 99.99th=[21365] 00:29:12.074 write: IOPS=4130, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1002msec); 0 zone resets 00:29:12.074 slat (usec): min=23, max=4413, avg=111.27, stdev=403.47 00:29:12.074 clat (usec): min=1615, max=20833, avg=15090.27, stdev=1835.84 00:29:12.074 lat (usec): min=1646, max=20865, avg=15201.54, stdev=1838.05 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[ 7177], 5.00th=[12387], 10.00th=[13435], 20.00th=[14222], 00:29:12.074 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:29:12.074 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16712], 95.00th=[17695], 00:29:12.074 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20317], 99.95th=[20579], 00:29:12.074 | 99.99th=[20841] 00:29:12.074 bw ( KiB/s): min=16384, max=16416, per=37.24%, avg=16400.00, stdev=22.63, samples=2 00:29:12.074 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:29:12.074 lat (msec) : 2=0.12%, 4=0.11%, 10=0.51%, 20=98.70%, 50=0.56% 00:29:12.074 cpu : usr=4.90%, sys=16.58%, ctx=521, majf=0, minf=16 00:29:12.074 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:12.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.074 issued rwts: total=4096,4139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.074 job2: (groupid=0, jobs=1): err= 0: pid=107180: Wed Nov 6 13:58:52 2024 00:29:12.074 read: IOPS=1690, BW=6761KiB/s (6923kB/s)(6788KiB/1004msec) 00:29:12.074 slat (usec): min=19, max=14151, avg=273.13, stdev=1413.89 00:29:12.074 clat (usec): min=1479, max=42990, avg=33663.16, stdev=5918.58 00:29:12.074 lat (usec): min=8765, max=50621, avg=33936.28, stdev=5793.79 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[ 9241], 5.00th=[27132], 10.00th=[28181], 20.00th=[29230], 00:29:12.074 | 30.00th=[29754], 40.00th=[33424], 50.00th=[35914], 60.00th=[36963], 00:29:12.074 | 70.00th=[37487], 80.00th=[37487], 90.00th=[39060], 95.00th=[41157], 00:29:12.074 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[42730], 00:29:12.074 | 99.99th=[42730] 00:29:12.074 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:29:12.074 slat (usec): min=24, max=10348, avg=251.54, stdev=1245.09 00:29:12.074 clat (usec): min=20861, max=43522, avg=33598.04, stdev=4557.27 00:29:12.074 lat (usec): min=24498, max=51716, avg=33849.58, stdev=4432.74 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[24511], 5.00th=[27132], 10.00th=[27657], 20.00th=[30016], 00:29:12.074 | 30.00th=[31327], 40.00th=[32113], 50.00th=[32900], 60.00th=[34341], 00:29:12.074 | 70.00th=[35914], 80.00th=[36963], 90.00th=[40109], 95.00th=[42206], 00:29:12.074 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:29:12.074 | 99.99th=[43779] 00:29:12.074 bw ( KiB/s): min= 8192, max= 8192, per=18.60%, avg=8192.00, stdev= 0.00, samples=2 00:29:12.074 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:29:12.074 lat (msec) : 2=0.03%, 10=0.85%, 20=0.85%, 50=98.26% 00:29:12.074 cpu : usr=2.39%, sys=8.08%, ctx=152, majf=0, minf=9 00:29:12.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:29:12.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.074 issued rwts: total=1697,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.074 job3: (groupid=0, jobs=1): err= 0: pid=107181: Wed Nov 6 13:58:52 2024 00:29:12.074 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:29:12.074 slat (usec): min=9, max=6651, avg=152.28, stdev=786.79 00:29:12.074 clat (usec): min=15326, max=27548, avg=20508.17, stdev=1664.12 00:29:12.074 lat (usec): min=15506, max=28313, avg=20660.45, stdev=1752.45 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[16712], 5.00th=[17695], 10.00th=[18220], 20.00th=[19268], 00:29:12.074 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:29:12.074 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22414], 95.00th=[22938], 00:29:12.074 | 99.00th=[24511], 99.50th=[25822], 99.90th=[27395], 99.95th=[27395], 00:29:12.074 | 99.99th=[27657] 00:29:12.074 write: IOPS=3291, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1002msec); 0 zone resets 00:29:12.074 slat (usec): min=23, max=6976, avg=149.09, stdev=747.89 00:29:12.074 clat (usec): min=591, max=28435, avg=19176.91, 
stdev=2834.46 00:29:12.074 lat (usec): min=5641, max=28493, avg=19326.00, stdev=2804.32 00:29:12.074 clat percentiles (usec): 00:29:12.074 | 1.00th=[ 6849], 5.00th=[14222], 10.00th=[15139], 20.00th=[17433], 00:29:12.074 | 30.00th=[18482], 40.00th=[19268], 50.00th=[19792], 60.00th=[20055], 00:29:12.074 | 70.00th=[20841], 80.00th=[21103], 90.00th=[22152], 95.00th=[22676], 00:29:12.074 | 99.00th=[24249], 99.50th=[24511], 99.90th=[26608], 99.95th=[27657], 00:29:12.074 | 99.99th=[28443] 00:29:12.074 bw ( KiB/s): min=12368, max=13016, per=28.82%, avg=12692.00, stdev=458.21, samples=2 00:29:12.074 iops : min= 3092, max= 3254, avg=3173.00, stdev=114.55, samples=2 00:29:12.074 lat (usec) : 750=0.02% 00:29:12.074 lat (msec) : 10=0.66%, 20=46.56%, 50=52.76% 00:29:12.074 cpu : usr=4.50%, sys=13.69%, ctx=235, majf=0, minf=10 00:29:12.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:12.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.074 issued rwts: total=3072,3298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.074 00:29:12.074 Run status group 0 (all jobs): 00:29:12.074 READ: bw=40.5MiB/s (42.4MB/s), 6126KiB/s-16.0MiB/s (6273kB/s-16.7MB/s), io=40.6MiB (42.6MB), run=1002-1004msec 00:29:12.074 WRITE: bw=43.0MiB/s (45.1MB/s), 6257KiB/s-16.1MiB/s (6407kB/s-16.9MB/s), io=43.2MiB (45.3MB), run=1002-1004msec 00:29:12.074 00:29:12.074 Disk stats (read/write): 00:29:12.074 nvme0n1: ios=1195/1536, merge=0/0, ticks=10617/15852, in_queue=26469, util=89.28% 00:29:12.074 nvme0n2: ios=3633/3615, merge=0/0, ticks=17122/15692, in_queue=32814, util=90.51% 00:29:12.074 nvme0n3: ios=1579/1696, merge=0/0, ticks=12989/12827, in_queue=25816, util=91.19% 00:29:12.074 nvme0n4: ios=2598/2952, merge=0/0, ticks=15933/16083, in_queue=32016, util=91.14% 00:29:12.074 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:12.074 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=107194 00:29:12.074 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:12.074 13:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:12.074 [global] 00:29:12.074 thread=1 00:29:12.074 invalidate=1 00:29:12.074 rw=read 00:29:12.074 time_based=1 00:29:12.074 runtime=10 00:29:12.074 ioengine=libaio 00:29:12.074 direct=1 00:29:12.074 bs=4096 00:29:12.074 iodepth=1 00:29:12.074 norandommap=1 00:29:12.074 numjobs=1 00:29:12.074 00:29:12.074 [job0] 00:29:12.074 filename=/dev/nvme0n1 00:29:12.074 [job1] 00:29:12.074 filename=/dev/nvme0n2 00:29:12.074 [job2] 00:29:12.074 filename=/dev/nvme0n3 00:29:12.074 [job3] 00:29:12.074 filename=/dev/nvme0n4 00:29:12.074 Could not set queue depth (nvme0n1) 00:29:12.074 Could not set queue depth (nvme0n2) 00:29:12.074 Could not set queue depth (nvme0n3) 00:29:12.074 Could not set queue depth (nvme0n4) 00:29:12.333 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:12.333 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:12.333 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:29:12.333 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:12.333 fio-3.35 00:29:12.333 Starting 4 threads 00:29:14.938 13:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:15.197 fio: pid=107238, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:15.197 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=38264832, buflen=4096 00:29:15.197 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:15.456 fio: pid=107237, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:15.456 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=25751552, buflen=4096 00:29:15.456 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:15.456 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:15.716 fio: pid=107235, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:15.716 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29122560, buflen=4096 00:29:15.716 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:15.716 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:15.976 fio: pid=107236, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:15.976 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50085888, buflen=4096 00:29:15.976 00:29:15.976 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107235: Wed Nov 6 13:58:56 2024 00:29:15.976 read: IOPS=2171, BW=8687KiB/s (8895kB/s)(27.8MiB/3274msec) 00:29:15.976 slat (usec): min=8, max=10613, avg=35.20, stdev=215.55 00:29:15.976 clat (usec): min=140, max=4825, avg=422.63, stdev=124.63 00:29:15.976 lat (usec): min=150, max=10939, avg=457.84, stdev=250.36 00:29:15.976 clat percentiles (usec): 00:29:15.976 | 1.00th=[ 184], 5.00th=[ 231], 10.00th=[ 262], 20.00th=[ 383], 00:29:15.976 | 30.00th=[ 404], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 453], 00:29:15.976 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 515], 00:29:15.976 | 99.00th=[ 644], 99.50th=[ 799], 99.90th=[ 1516], 99.95th=[ 2245], 00:29:15.976 | 99.99th=[ 4817] 00:29:15.976 bw ( KiB/s): min= 8136, max= 9017, per=21.19%, avg=8350.67, stdev=331.63, samples=6 00:29:15.976 iops : min= 2034, max= 2254, avg=2087.50, stdev=82.84, samples=6 00:29:15.976 lat (usec) : 250=8.65%, 500=83.34%, 750=7.41%, 1000=0.28% 00:29:15.976 lat (msec) : 2=0.25%, 4=0.03%, 10=0.03% 00:29:15.976 cpu : usr=1.25%, sys=5.56%, ctx=7118, majf=0, minf=1 00:29:15.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 issued rwts: 
total=7111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:15.976 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107236: Wed Nov 6 13:58:56 2024 00:29:15.976 read: IOPS=3444, BW=13.5MiB/s (14.1MB/s)(47.8MiB/3550msec) 00:29:15.976 slat (usec): min=8, max=12375, avg=18.05, stdev=192.40 00:29:15.976 clat (usec): min=117, max=43089, avg=271.03, stdev=397.71 00:29:15.976 lat (usec): min=127, max=43099, avg=289.08, stdev=442.25 00:29:15.976 clat percentiles (usec): 00:29:15.976 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 157], 20.00th=[ 241], 00:29:15.976 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:29:15.976 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:29:15.976 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 1123], 99.95th=[ 1500], 00:29:15.976 | 99.99th=[ 5800] 00:29:15.976 bw ( KiB/s): min=12790, max=13664, per=33.49%, avg=13193.83, stdev=302.45, samples=6 00:29:15.976 iops : min= 3197, max= 3416, avg=3298.33, stdev=75.76, samples=6 00:29:15.976 lat (usec) : 250=25.34%, 500=74.41%, 750=0.07%, 1000=0.05% 00:29:15.976 lat (msec) : 2=0.09%, 4=0.02%, 10=0.01%, 50=0.01% 00:29:15.976 cpu : usr=0.90%, sys=4.17%, ctx=12247, majf=0, minf=2 00:29:15.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 issued rwts: total=12229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:15.976 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107237: Wed Nov 6 13:58:56 2024 00:29:15.976 read: IOPS=2048, BW=8192KiB/s (8388kB/s)(24.6MiB/3070msec) 00:29:15.976 slat (usec): min=11, max=15591, avg=37.78, stdev=241.67 00:29:15.976 clat (usec): min=186, max=6340, avg=447.01, stdev=120.98 00:29:15.976 lat (usec): min=200, max=16166, avg=484.79, stdev=271.68 00:29:15.976 clat percentiles (usec): 00:29:15.976 | 1.00th=[ 273], 5.00th=[ 367], 10.00th=[ 388], 20.00th=[ 408], 00:29:15.976 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 453], 00:29:15.976 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 519], 00:29:15.976 | 99.00th=[ 660], 99.50th=[ 832], 99.90th=[ 1909], 99.95th=[ 2245], 00:29:15.976 | 99.99th=[ 6325] 00:29:15.976 bw ( KiB/s): min= 8056, max= 8256, per=20.75%, avg=8175.80, stdev=85.95, samples=5 00:29:15.976 iops : min= 2014, max= 2064, avg=2043.80, stdev=21.34, samples=5 00:29:15.976 lat (usec) : 250=0.38%, 500=91.59%, 750=7.30%, 1000=0.43% 00:29:15.976 lat (msec) : 2=0.21%, 4=0.06%, 10=0.02% 00:29:15.976 cpu : usr=1.11%, sys=6.00%, ctx=6290, majf=0, minf=2 00:29:15.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:15.976 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107238: Wed Nov 6 13:58:56 2024 00:29:15.976 read: IOPS=3277, BW=12.8MiB/s (13.4MB/s)(36.5MiB/2851msec) 00:29:15.976 slat (usec): 
min=8, max=110, avg=15.93, stdev= 5.03 00:29:15.976 clat (usec): min=164, max=8031, avg=287.68, stdev=127.17 00:29:15.976 lat (usec): min=179, max=8040, avg=303.61, stdev=127.37 00:29:15.976 clat percentiles (usec): 00:29:15.976 | 1.00th=[ 204], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:29:15.976 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:29:15.976 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 330], 00:29:15.976 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 873], 99.95th=[ 2966], 00:29:15.976 | 99.99th=[ 8029] 00:29:15.976 bw ( KiB/s): min=12992, max=13464, per=33.51%, avg=13201.00, stdev=176.21, samples=5 00:29:15.976 iops : min= 3248, max= 3366, avg=3300.20, stdev=44.04, samples=5 00:29:15.976 lat (usec) : 250=8.78%, 500=91.09%, 750=0.01%, 1000=0.01% 00:29:15.976 lat (msec) : 2=0.03%, 4=0.04%, 10=0.02% 00:29:15.976 cpu : usr=0.84%, sys=4.60%, ctx=9343, majf=0, minf=1 00:29:15.976 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.976 issued rwts: total=9343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.976 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:15.976 00:29:15.976 Run status group 0 (all jobs): 00:29:15.976 READ: bw=38.5MiB/s (40.3MB/s), 8192KiB/s-13.5MiB/s (8388kB/s-14.1MB/s), io=137MiB (143MB), run=2851-3550msec 00:29:15.976 00:29:15.976 Disk stats (read/write): 00:29:15.976 nvme0n1: ios=6529/0, merge=0/0, ticks=2909/0, in_queue=2909, util=95.22% 00:29:15.976 nvme0n2: ios=11126/0, merge=0/0, ticks=3214/0, in_queue=3214, util=95.41% 00:29:15.976 nvme0n3: ios=5932/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.60% 00:29:15.976 nvme0n4: ios=8613/0, merge=0/0, ticks=2507/0, in_queue=2507, util=96.21% 00:29:15.976 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:15.976 13:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:16.235 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:16.235 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:16.494 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:16.494 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:16.753 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:16.753 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:17.012 13:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.012 13:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 107194 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:17.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:17.271 nvmf hotplug test: fio failed as expected 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:17.271 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.530 13:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.530 rmmod nvme_tcp 00:29:17.530 rmmod nvme_fabrics 00:29:17.530 rmmod nvme_keyring 00:29:17.530 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 106704 ']' 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 106704 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 106704 ']' 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 106704 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106704 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:17.790 killing process with pid 106704 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106704' 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 106704 00:29:17.790 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 106704 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:18.048 13:58:58 
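The killprocess helper traced above follows a defensive sequence: verify the pid is still alive with kill -0, read its command name with ps and refuse to touch anything running as sudo, then kill it and reap it with wait. Condensed into a sketch (pid taken from this trace; error reporting trimmed):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1               # already gone?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reaping only works when $pid is a child of this shell
    }
    killprocess 106704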
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:18.048 13:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:29:18.308 00:29:18.308 real 0m20.169s 00:29:18.308 user 0m59.093s 00:29:18.308 sys 0m11.526s 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 ************************************ 00:29:18.308 END TEST nvmf_fio_target 00:29:18.308 ************************************ 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.308 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 ************************************ 00:29:18.308 START TEST nvmf_bdevio 00:29:18.308 ************************************ 00:29:18.308 13:58:59 
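The START TEST / END TEST banners around this point come from run_test, the autotest harness wrapper that also produced the real/user/sys totals above. A simplified sketch of the pattern (banner width trimmed; the real helper additionally validates its arguments and manages xtrace, which this sketch omits):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    run_test nvmf_bdevio ./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode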
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:18.567 * Looking for test storage... 00:29:18.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.568 --rc genhtml_branch_coverage=1 00:29:18.568 --rc genhtml_function_coverage=1 00:29:18.568 --rc genhtml_legend=1 00:29:18.568 --rc geninfo_all_blocks=1 00:29:18.568 --rc geninfo_unexecuted_blocks=1 00:29:18.568 00:29:18.568 ' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.568 --rc genhtml_branch_coverage=1 00:29:18.568 --rc genhtml_function_coverage=1 00:29:18.568 --rc genhtml_legend=1 00:29:18.568 --rc geninfo_all_blocks=1 00:29:18.568 --rc geninfo_unexecuted_blocks=1 00:29:18.568 00:29:18.568 ' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.568 --rc genhtml_branch_coverage=1 00:29:18.568 --rc genhtml_function_coverage=1 00:29:18.568 --rc genhtml_legend=1 00:29:18.568 --rc geninfo_all_blocks=1 00:29:18.568 --rc geninfo_unexecuted_blocks=1 00:29:18.568 00:29:18.568 ' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.568 --rc genhtml_branch_coverage=1 00:29:18.568 --rc genhtml_function_coverage=1 00:29:18.568 --rc genhtml_legend=1 00:29:18.568 --rc geninfo_all_blocks=1 00:29:18.568 --rc geninfo_unexecuted_blocks=1 00:29:18.568 00:29:18.568 ' 00:29:18.568 13:58:59 
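The lt 1.15 2 walk above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0. The same logic as a standalone sketch (function name shortened; behavior matches the trace for these inputs):

    lt_version() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt_version 1.15 2 && echo 'lcov predates 2, keep branch/function coverage opts'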
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.568 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.569 13:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.569 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:18.828 Cannot find device "nvmf_init_br" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:18.828 Cannot find device "nvmf_init_br2" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:18.828 Cannot find device "nvmf_tgt_br" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.828 Cannot find device "nvmf_tgt_br2" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:18.828 Cannot find device "nvmf_init_br" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:18.828 Cannot find device "nvmf_init_br2" 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:29:18.828 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:18.829 Cannot find device "nvmf_tgt_br" 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:18.829 Cannot find device "nvmf_tgt_br2" 00:29:18.829 13:58:59 
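Every "Cannot find device" below is expected: nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and each cleanup command tolerates a missing device (hence the bare true after every failure). A minimal sketch of that idempotent pre-cleanup idiom (device names from this trace):

    # Best-effort cleanup; each step may fail when the device never existed.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2> /dev/null || true
        ip link set "$dev" down 2> /dev/null || true
    done
    ip link delete nvmf_br type bridge 2> /dev/null || true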
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:18.829 Cannot find device "nvmf_br" 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:18.829 Cannot find device "nvmf_init_if" 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:18.829 Cannot find device "nvmf_init_if2" 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:18.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:18.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:18.829 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:19.088 13:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:19.088 13:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:19.088 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:19.088 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:19.088 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:19.088 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:19.088 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:19.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
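The ping exchanges beginning here verify the topology that the preceding bring-up built: one veth pair per endpoint, target ends moved into a namespace, host-side peers bridged together, and TCP/4420 opened toward the initiator interfaces. Condensed, with names and addresses exactly as in this trace (the if2/br2 pair is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the address, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT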
00:29:19.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:29:19.347 00:29:19.347 --- 10.0.0.3 ping statistics --- 00:29:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.347 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:19.347 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:19.347 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:29:19.347 00:29:19.347 --- 10.0.0.4 ping statistics --- 00:29:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.347 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:19.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:29:19.347 00:29:19.347 --- 10.0.0.1 ping statistics --- 00:29:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.347 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:19.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:29:19.347 00:29:19.347 --- 10.0.0.2 ping statistics --- 00:29:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.347 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=107624 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 107624 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 107624 ']' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:19.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:19.347 13:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:19.348 [2024-11-06 13:59:00.145642] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:19.348 [2024-11-06 13:59:00.146777] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:29:19.348 [2024-11-06 13:59:00.146847] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.607 [2024-11-06 13:59:00.300315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.607 [2024-11-06 13:59:00.383192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.607 [2024-11-06 13:59:00.383252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.607 [2024-11-06 13:59:00.383263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.607 [2024-11-06 13:59:00.383272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.607 [2024-11-06 13:59:00.383279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.607 [2024-11-06 13:59:00.384957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.607 [2024-11-06 13:59:00.385159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:19.607 [2024-11-06 13:59:00.385319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:19.607 [2024-11-06 13:59:00.385386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.607 [2024-11-06 13:59:00.518920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:19.607 [2024-11-06 13:59:00.519919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:19.607 [2024-11-06 13:59:00.520233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
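nvmfappstart above launches the target inside the namespace and records its pid before waiting for the RPC socket to answer. A sketch of that start/wait pattern (binary path and flags copied from this trace; the polling loop is an assumed condensation of waitforlisten, whose body is not shown here):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Block until the app answers on /var/tmp/spdk.sock; bail if it died.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done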
00:29:19.607 [2024-11-06 13:59:00.520295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:19.607 [2024-11-06 13:59:00.520396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:20.175 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:20.175 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:29:20.175 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.175 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.175 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 [2024-11-06 13:59:01.122744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 Malloc0 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:20.434 [2024-11-06 13:59:01.218780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.434 { 00:29:20.434 "params": { 00:29:20.434 "name": "Nvme$subsystem", 00:29:20.434 "trtype": "$TEST_TRANSPORT", 00:29:20.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.434 "adrfam": "ipv4", 00:29:20.434 "trsvcid": "$NVMF_PORT", 00:29:20.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.434 "hdgst": ${hdgst:-false}, 00:29:20.434 "ddgst": ${ddgst:-false} 00:29:20.434 }, 00:29:20.434 "method": "bdev_nvme_attach_controller" 00:29:20.434 } 00:29:20.434 EOF 00:29:20.434 )") 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:20.434 13:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.434 "params": { 00:29:20.434 "name": "Nvme1", 00:29:20.434 "trtype": "tcp", 00:29:20.434 "traddr": "10.0.0.3", 00:29:20.434 "adrfam": "ipv4", 00:29:20.434 "trsvcid": "4420", 00:29:20.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.434 "hdgst": false, 00:29:20.434 "ddgst": false 00:29:20.434 }, 00:29:20.434 "method": "bdev_nvme_attach_controller" 00:29:20.434 }' 00:29:20.434 [2024-11-06 13:59:01.279218] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
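The bdevio target above is provisioned with five RPCs and then exercised by feeding the generated JSON to bdevio over a file descriptor; every command below appears verbatim in this trace, with only the script paths shortened:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed
    # above; the /dev/fd/62 in the trace is bash process substitution:
    bdevio --json <(gen_nvmf_target_json)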
00:29:20.435 [2024-11-06 13:59:01.279303] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107678 ] 00:29:20.694 [2024-11-06 13:59:01.458834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.694 [2024-11-06 13:59:01.569070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.694 [2024-11-06 13:59:01.569257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.694 [2024-11-06 13:59:01.569260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.952 I/O targets: 00:29:20.952 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:20.952 00:29:20.952 00:29:20.952 CUnit - A unit testing framework for C - Version 2.1-3 00:29:20.952 http://cunit.sourceforge.net/ 00:29:20.952 00:29:20.952 00:29:20.952 Suite: bdevio tests on: Nvme1n1 00:29:20.952 Test: blockdev write read block ...passed 00:29:20.952 Test: blockdev write zeroes read block ...passed 00:29:20.952 Test: blockdev write zeroes read no split ...passed 00:29:21.210 Test: blockdev write zeroes read split ...passed 00:29:21.210 Test: blockdev write zeroes read split partial ...passed 00:29:21.210 Test: blockdev reset ...[2024-11-06 13:59:01.947895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:21.211 [2024-11-06 13:59:01.948004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a46f50 (9): Bad file descriptor 00:29:21.211 [2024-11-06 13:59:01.951027] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:21.211 passed 00:29:21.211 Test: blockdev write read 8 blocks ...passed 00:29:21.211 Test: blockdev write read size > 128k ...passed 00:29:21.211 Test: blockdev write read invalid size ...passed 00:29:21.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.211 Test: blockdev write read max offset ...passed 00:29:21.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.211 Test: blockdev writev readv 8 blocks ...passed 00:29:21.211 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.211 Test: blockdev writev readv block ...passed 00:29:21.211 Test: blockdev writev readv size > 128k ...passed 00:29:21.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.211 Test: blockdev comparev and writev ...[2024-11-06 13:59:02.123641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.123700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.123717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.123727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:21.211 [2024-11-06 13:59:02.124675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:21.211 [2024-11-06 13:59:02.124684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:21.470 passed 00:29:21.470 Test: blockdev nvme passthru rw ...passed 00:29:21.470 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:59:02.207185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.470 [2024-11-06 13:59:02.207210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:21.470 [2024-11-06 13:59:02.207308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.470 [2024-11-06 13:59:02.207319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:21.470 [2024-11-06 13:59:02.207407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.470 [2024-11-06 13:59:02.207418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:21.470 passed 00:29:21.470 Test: blockdev nvme admin passthru ...[2024-11-06 13:59:02.207506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.470 [2024-11-06 13:59:02.207517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:21.470 passed 00:29:21.470 Test: blockdev copy ...passed 00:29:21.470 00:29:21.470 Run Summary: Type Total Ran Passed Failed Inactive 00:29:21.470 suites 1 1 n/a 0 0 00:29:21.470 tests 23 23 23 0 0 00:29:21.470 asserts 152 152 152 0 n/a 00:29:21.470 00:29:21.470 Elapsed time = 0.944 seconds 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.730 rmmod nvme_tcp 00:29:21.730 rmmod nvme_fabrics 00:29:21.730 rmmod nvme_keyring 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 107624 ']' 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 107624 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 107624 ']' 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 107624 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:21.730 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107624 00:29:21.989 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:29:21.989 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:29:21.989 killing process with pid 107624 00:29:21.989 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107624' 00:29:21.989 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 107624 00:29:21.989 13:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 107624 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.249 13:59:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:22.249 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:29:22.508 00:29:22.508 real 0m4.144s 00:29:22.508 user 0m8.468s 00:29:22.508 sys 0m1.913s 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:22.508 ************************************ 00:29:22.508 END TEST nvmf_bdevio 00:29:22.508 ************************************ 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:22.508 00:29:22.508 real 3m39.426s 00:29:22.508 user 9m12.912s 00:29:22.508 sys 1m33.570s 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:22.508 13:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:22.508 ************************************ 00:29:22.508 END TEST nvmf_target_core_interrupt_mode 00:29:22.508 ************************************ 00:29:22.767 13:59:03 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:22.767 13:59:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:22.767 13:59:03 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:22.767 13:59:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.767 ************************************ 00:29:22.767 START TEST nvmf_interrupt 00:29:22.767 ************************************ 00:29:22.767 13:59:03 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:22.767 * Looking for test storage... 00:29:22.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:22.767 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:22.767 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:29:22.767 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.029 --rc genhtml_branch_coverage=1 00:29:23.029 --rc genhtml_function_coverage=1 00:29:23.029 --rc genhtml_legend=1 00:29:23.029 --rc geninfo_all_blocks=1 00:29:23.029 --rc geninfo_unexecuted_blocks=1 00:29:23.029 00:29:23.029 ' 00:29:23.029 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.029 --rc genhtml_branch_coverage=1 00:29:23.030 --rc genhtml_function_coverage=1 00:29:23.030 --rc genhtml_legend=1 00:29:23.030 --rc geninfo_all_blocks=1 00:29:23.030 --rc geninfo_unexecuted_blocks=1 00:29:23.030 00:29:23.030 ' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.030 --rc genhtml_branch_coverage=1 00:29:23.030 --rc genhtml_function_coverage=1 00:29:23.030 --rc genhtml_legend=1 00:29:23.030 --rc geninfo_all_blocks=1 00:29:23.030 --rc geninfo_unexecuted_blocks=1 00:29:23.030 00:29:23.030 ' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.030 --rc genhtml_branch_coverage=1 00:29:23.030 --rc genhtml_function_coverage=1 00:29:23.030 --rc genhtml_legend=1 00:29:23.030 --rc geninfo_all_blocks=1 00:29:23.030 --rc geninfo_unexecuted_blocks=1 00:29:23.030 00:29:23.030 ' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
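A few entries back, lt 1.15 2 is the harness asking whether the installed lcov predates 2.x: cmp_versions splits both version strings on ., - and : (the IFS=.-: reads above) and walks the components numerically. A condensed sketch of just the '<' branch exercised here, under a hypothetical name since the real helper also dispatches on the other operators:

    # Condensed sketch of the component walk traced above; version_lt is
    # this sketch's name, not the one used in scripts/common.sh.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
        done
        return 1               # equal versions: "<" does not hold
    }

version_lt 1.15 2 succeeds (1 < 2 on the first component), which is why the trace goes on to export the lcov branch/function coverage options.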
00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:29:23.030 13:59:03 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
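One detail from the common.sh setup above: the initiator identity is minted per run rather than hard-coded. nvme gen-hostnqn produced nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce, the uuid component doubles as NVME_HOSTID, and both land in the NVME_HOST array that the later nvme connect call expands. A sketch of that derivation; the parameter expansion used to peel off the uuid is an assumption, since the trace only shows the resulting values:

    # Sketch of the host-identity derivation traced above. The uuid is
    # whatever gen-hostnqn mints; ##*uuid: is an assumed way to extract it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<fresh-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # the uuid component on its own
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Later expanded as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subnqn> -a <addr> -s 4420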
00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:23.030 Cannot find device "nvmf_init_br" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:23.030 Cannot find device "nvmf_init_br2" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:23.030 Cannot find device "nvmf_tgt_br" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:23.030 Cannot find device "nvmf_tgt_br2" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:23.030 Cannot find device "nvmf_init_br" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:23.030 Cannot find device "nvmf_init_br2" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:23.030 Cannot find device "nvmf_tgt_br" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:23.030 Cannot find device "nvmf_tgt_br2" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:23.030 Cannot find device "nvmf_br" 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:29:23.030 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:29:23.293 Cannot find device "nvmf_init_if" 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:23.293 Cannot find device "nvmf_init_if2" 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:23.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:23.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:29:23.293 13:59:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:23.293 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
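Taken together, the commands above (plus the master-assignment entries that open the next trace block) build the whole test fabric: four veth pairs, the *_if ends carrying the 10.0.0.1-4/24 addresses, the *_br peer ends enslaved to the nvmf_br bridge, and the two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. The same topology condensed into plain commands, preserving the names and address plan from the trace:

    # Condensed sketch of the veth/bridge topology built by nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br                        # bridge the peer ends together
    done

With the bridge forwarding between the peer ends, the root namespace can reach 10.0.0.3/4 and the namespace can reach 10.0.0.1/2, which is exactly what the four pings that follow verify.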
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:29:23.553 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:29:23.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:29:23.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms
00:29:23.554
00:29:23.554 --- 10.0.0.3 ping statistics ---
00:29:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:23.554 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:23.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:23.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms
00:29:23.554
00:29:23.554 --- 10.0.0.4 ping statistics ---
00:29:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:23.554 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:23.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:23.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms
00:29:23.554
00:29:23.554 --- 10.0.0.1 ping statistics ---
00:29:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:23.554 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:23.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:23.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:29:23.554 00:29:23.554 --- 10.0.0.2 ping statistics --- 00:29:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.554 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=107935 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 107935 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 107935 ']' 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:23.554 13:59:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 [2024-11-06 13:59:04.443440] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:23.554 [2024-11-06 13:59:04.444403] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:29:23.554 [2024-11-06 13:59:04.445178] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.813 [2024-11-06 13:59:04.617636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:23.813 [2024-11-06 13:59:04.696513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
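With the fabric verified, nvmfappstart launches the target inside the namespace in interrupt mode (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 entry above), records nvmfpid=107935, and waits for the RPC socket. A minimal sketch of that launch-and-wait step; the polling loop stands in for the harness's waitforlisten helper and is an assumption about its behavior:

    # Minimal sketch of the interrupt-mode target launch traced above.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3)

    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" &
    nvmfpid=$!

    # waitforlisten blocks until the app answers on /var/tmp/spdk.sock;
    # polling the RPC endpoint is one plausible reduction of it:
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

The -m 0x3 mask pins two reactors (cores 0 and 1), which is why the startup notices that follow report a reactor on each core and why the test later polls reactor_0 and reactor_1 separately.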
00:29:23.813 [2024-11-06 13:59:04.696580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.813 [2024-11-06 13:59:04.696591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.813 [2024-11-06 13:59:04.696599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.813 [2024-11-06 13:59:04.696607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.813 [2024-11-06 13:59:04.698007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.813 [2024-11-06 13:59:04.698007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.072 [2024-11-06 13:59:04.830781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.072 [2024-11-06 13:59:04.831259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.072 [2024-11-06 13:59:04.831573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:24.641 5000+0 records in 00:29:24.641 5000+0 records out 00:29:24.641 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0395437 s, 259 MB/s 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.641 AIO0 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.641 [2024-11-06 13:59:05.531119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.641 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:24.901 [2024-11-06 13:59:05.575547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107935 0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 0 idle 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107935 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.35 reactor_0' 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107935 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.35 reactor_0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107935 1 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 1 idle 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:24.901 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107939 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.00 reactor_1' 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107939 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.00 reactor_1 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=108009 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:25.161 
13:59:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107935 0 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107935 0 busy 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:25.161 13:59:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107935 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.35 reactor_0' 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107935 root 20 0 64.2g 44928 32512 S 0.0 0.4 0:00.35 reactor_0 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:25.421 13:59:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:29:26.358 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:29:26.358 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:26.358 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:26.358 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107935 root 20 0 64.2g 46208 32896 R 99.9 0.4 0:01.74 reactor_0' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107935 root 20 0 64.2g 46208 32896 R 99.9 0.4 0:01.74 reactor_0 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107935 1 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107935 1 busy 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107939 root 20 0 64.2g 46208 32896 R 68.8 0.4 0:00.82 reactor_1' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107939 root 20 0 64.2g 46208 32896 R 68.8 0.4 0:00.82 reactor_1 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=68.8 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=68 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:26.617 13:59:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 108009 00:29:36.603 Initializing NVMe Controllers 00:29:36.603 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.603 Controller IO queue size 256, less than required. 00:29:36.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:36.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:36.603 Initialization complete. Launching workers. 
00:29:36.603 ========================================================
00:29:36.603 Latency(us)
00:29:36.603 Device Information : IOPS MiB/s Average min max
00:29:36.603 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 8523.80 33.30 30065.16 7538.85 52015.15
00:29:36.603 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 8488.20 33.16 30183.75 8167.49 50626.41
00:29:36.603 ========================================================
00:29:36.603 Total : 17011.99 66.45 30124.33 7538.85 52015.15
00:29:36.603
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107935 0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 0 idle
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107935 root 20 0 64.2g 46208 32896 S 0.0 0.4 0:13.10 reactor_0'
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107935 root 20 0 64.2g 46208 32896 S 0.0 0.4 0:13.10 reactor_0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107935 1
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 1 idle
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935
00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local
idx=1 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107939 root 20 0 64.2g 46208 32896 S 0.0 0.4 0:06.37 reactor_1' 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107939 root 20 0 64.2g 46208 32896 S 0.0 0.4 0:06.37 reactor_1 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:36.603 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:29:36.604 13:59:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107935 0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 0 idle 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107935 root 20 0 64.2g 48256 32896 S 0.0 0.4 0:13.16 reactor_0' 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107935 root 20 0 64.2g 48256 32896 S 0.0 0.4 0:13.16 reactor_0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107935 1 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107935 1 idle 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107935 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107935 -w 256 00:29:37.983 13:59:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107939 root 20 0 64.2g 48256 32896 S 0.0 0.4 0:06.40 reactor_1' 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107939 root 20 0 64.2g 48256 32896 S 0.0 0.4 0:06.40 reactor_1 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:38.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:38.242 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:38.502 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:29:38.502 13:59:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:29:38.502 13:59:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:29:38.502 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.502 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:29:38.761 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.761 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:29:38.761 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.761 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.761 rmmod nvme_tcp 00:29:38.761 rmmod nvme_fabrics 00:29:38.761 rmmod nvme_keyring 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 107935 ']' 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 107935 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 107935 ']' 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 107935 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107935 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:39.023 killing process with pid 107935 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107935' 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 107935 00:29:39.023 13:59:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 107935 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:39.281 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:29:39.552 00:29:39.552 real 0m16.907s 00:29:39.552 user 0m27.889s 00:29:39.552 sys 0m7.854s 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:39.552 13:59:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:39.552 ************************************ 00:29:39.552 END TEST nvmf_interrupt 00:29:39.552 ************************************ 00:29:39.552 00:29:39.552 real 20m34.532s 00:29:39.552 user 51m29.752s 00:29:39.552 sys 6m21.628s 00:29:39.552 13:59:20 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:39.552 13:59:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.552 ************************************ 00:29:39.552 END TEST nvmf_tcp 00:29:39.552 ************************************ 00:29:39.811 13:59:20 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:29:39.811 13:59:20 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:39.811 13:59:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:39.811 13:59:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:39.811 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:29:39.811 ************************************ 00:29:39.811 START TEST spdkcli_nvmf_tcp 00:29:39.811 ************************************ 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:39.811 * Looking for test storage... 
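For reference, the nvmf_interrupt run above decides that a reactor is idle by taking one batch sample of top for the target PID, pulling the reactor thread's %CPU out of column 9, and comparing it against an idle threshold of 30 (65 is the busy threshold). Below is a minimal sketch of that check; the helper name is hypothetical, the integer truncation mirrors what the trace shows (0.0 becomes 0), and the ten-try retry loop the real interrupt/common.sh wraps around the sample is omitted.

reactor_is_idle_sketch() {        # hypothetical name, not the SPDK helper itself
    local pid=$1 idx=$2
    local idle_threshold=30
    local top_reactor cpu_rate
    # One batch iteration of top, thread view, restricted to the target process.
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
    # %CPU is field 9 once leading whitespace is stripped.
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}       # truncate "0.0" to "0" for integer arithmetic
    (( cpu_rate <= idle_threshold ))
}

The same trace also shows the companion helper waitforserial: after nvme connect it simply re-runs lsblk -l -o NAME,SERIAL every two seconds and counts rows matching the serial (SPDKISFASTANDAWESOME) until the expected number of devices appears.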
00:29:39.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.811 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:40.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.072 --rc genhtml_branch_coverage=1 00:29:40.072 --rc genhtml_function_coverage=1 00:29:40.072 --rc genhtml_legend=1 00:29:40.072 --rc geninfo_all_blocks=1 00:29:40.072 --rc geninfo_unexecuted_blocks=1 00:29:40.072 00:29:40.072 ' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:40.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.072 --rc genhtml_branch_coverage=1 
00:29:40.072 --rc genhtml_function_coverage=1 00:29:40.072 --rc genhtml_legend=1 00:29:40.072 --rc geninfo_all_blocks=1 00:29:40.072 --rc geninfo_unexecuted_blocks=1 00:29:40.072 00:29:40.072 ' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:40.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.072 --rc genhtml_branch_coverage=1 00:29:40.072 --rc genhtml_function_coverage=1 00:29:40.072 --rc genhtml_legend=1 00:29:40.072 --rc geninfo_all_blocks=1 00:29:40.072 --rc geninfo_unexecuted_blocks=1 00:29:40.072 00:29:40.072 ' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:40.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.072 --rc genhtml_branch_coverage=1 00:29:40.072 --rc genhtml_function_coverage=1 00:29:40.072 --rc genhtml_legend=1 00:29:40.072 --rc geninfo_all_blocks=1 00:29:40.072 --rc geninfo_unexecuted_blocks=1 00:29:40.072 00:29:40.072 ' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:29:40.072 13:59:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108349 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108349 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 108349 ']' 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.073 13:59:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.073 [2024-11-06 13:59:20.864328] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:29:40.073 [2024-11-06 13:59:20.864424] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108349 ] 00:29:40.331 [2024-11-06 13:59:21.014456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:40.331 [2024-11-06 13:59:21.095928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.331 [2024-11-06 13:59:21.095931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.897 13:59:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.155 13:59:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:41.155 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:41.155 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:41.155 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:41.155 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:41.155 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:41.155 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:41.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:41.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:41.155 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:41.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:41.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:41.156 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:41.156 ' 00:29:43.688 [2024-11-06 13:59:24.616573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.062 [2024-11-06 13:59:25.967517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:47.679 [2024-11-06 13:59:28.473309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:50.213 [2024-11-06 13:59:30.655608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:51.590 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:51.590 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:51.590 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:51.590 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:51.590 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:51.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:51.590 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:51.590 13:59:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:52.157 13:59:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:52.157 13:59:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:52.157 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.158 13:59:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:52.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:52.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:52.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:52.158 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:52.158 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:52.158 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:52.158 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:52.158 ' 00:29:58.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:58.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:58.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:58.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:58.719 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:58.719 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:58.719 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.719 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:58.719 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:58.719 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:58.719 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:58.720 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:58.720 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 108349 ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 108349' 00:29:58.720 killing process with pid 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108349 ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108349 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 108349 ']' 00:29:58.720 Process with pid 108349 is not found 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 108349 00:29:58.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (108349) - No such process 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 108349 is not found' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:58.720 13:59:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:58.720 00:29:58.720 real 0m18.470s 00:29:58.720 user 0m40.376s 00:29:58.720 sys 0m1.191s 00:29:58.720 ************************************ 
00:29:58.720 END TEST spdkcli_nvmf_tcp 00:29:58.720 ************************************ 00:29:58.720 13:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:58.720 13:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.720 13:59:39 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.720 13:59:39 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:58.720 13:59:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:58.720 13:59:39 -- common/autotest_common.sh@10 -- # set +x 00:29:58.720 ************************************ 00:29:58.720 START TEST nvmf_identify_passthru 00:29:58.720 ************************************ 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.720 * Looking for test storage... 00:29:58.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:58.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.720 --rc genhtml_branch_coverage=1 00:29:58.720 --rc genhtml_function_coverage=1 00:29:58.720 --rc genhtml_legend=1 00:29:58.720 --rc geninfo_all_blocks=1 00:29:58.720 --rc geninfo_unexecuted_blocks=1 00:29:58.720 00:29:58.720 ' 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:58.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.720 --rc genhtml_branch_coverage=1 00:29:58.720 --rc genhtml_function_coverage=1 00:29:58.720 --rc genhtml_legend=1 00:29:58.720 --rc geninfo_all_blocks=1 00:29:58.720 --rc geninfo_unexecuted_blocks=1 00:29:58.720 00:29:58.720 ' 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:58.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.720 --rc genhtml_branch_coverage=1 00:29:58.720 --rc genhtml_function_coverage=1 00:29:58.720 --rc genhtml_legend=1 00:29:58.720 --rc geninfo_all_blocks=1 00:29:58.720 --rc geninfo_unexecuted_blocks=1 00:29:58.720 00:29:58.720 ' 00:29:58.720 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:58.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.720 --rc genhtml_branch_coverage=1 00:29:58.720 --rc genhtml_function_coverage=1 00:29:58.720 --rc genhtml_legend=1 00:29:58.720 --rc geninfo_all_blocks=1 00:29:58.720 --rc geninfo_unexecuted_blocks=1 00:29:58.720 00:29:58.720 ' 00:29:58.720 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.720 
13:59:39 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.720 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.720 13:59:39 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.720 13:59:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.720 13:59:39 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.721 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:58.721 13:59:39 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.721 13:59:39 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.721 13:59:39 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.721 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.721 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:58.721 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:58.721 Cannot find device "nvmf_init_br" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:58.721 Cannot find device "nvmf_init_br2" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:58.721 Cannot find device "nvmf_tgt_br" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:58.721 Cannot find device "nvmf_tgt_br2" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:58.721 Cannot find device "nvmf_init_br" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:58.721 Cannot find device "nvmf_init_br2" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:58.721 Cannot find device "nvmf_tgt_br" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:58.721 Cannot find device "nvmf_tgt_br2" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:58.721 Cannot find device "nvmf_br" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:58.721 Cannot find device "nvmf_init_if" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:58.721 Cannot find device "nvmf_init_if2" 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:58.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:58.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:58.721 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:58.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:58.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:29:58.980 00:29:58.980 --- 10.0.0.3 ping statistics --- 00:29:58.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.980 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:58.980 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:58.980 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.167 ms 00:29:58.980 00:29:58.980 --- 10.0.0.4 ping statistics --- 00:29:58.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.980 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:58.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:58.980 00:29:58.980 --- 10.0.0.1 ping statistics --- 00:29:58.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.980 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:58.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:58.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:29:58.980 00:29:58.980 --- 10.0.0.2 ping statistics --- 00:29:58.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.980 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.980 13:59:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.980 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.980 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:29:58.980 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:59.239 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:29:59.239 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:59.239 13:59:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:29:59.239 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:59.239 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:59.239 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:59.239 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:59.239 13:59:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:59.239 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:29:59.239 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:59.239 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:59.239 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108879 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.526 13:59:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108879 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 108879 ']' 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:59.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:59.526 13:59:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.807 [2024-11-06 13:59:40.501217] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:29:59.807 [2024-11-06 13:59:40.501722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.807 [2024-11-06 13:59:40.655031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.807 [2024-11-06 13:59:40.738531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.807 [2024-11-06 13:59:40.738800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.807 [2024-11-06 13:59:40.739003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.807 [2024-11-06 13:59:40.739015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:59.807 [2024-11-06 13:59:40.739023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.807 [2024-11-06 13:59:40.740409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.065 [2024-11-06 13:59:40.740683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.065 [2024-11-06 13:59:40.740580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.065 [2024-11-06 13:59:40.740688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:30:00.630 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.630 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.630 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 [2024-11-06 13:59:41.582654] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.888 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 [2024-11-06 13:59:41.597083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.888 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 Nvme0n1 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.888 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.888 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.888 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.889 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.889 [2024-11-06 13:59:41.781031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.889 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.889 [ 00:30:00.889 { 00:30:00.889 "allow_any_host": true, 00:30:00.889 "hosts": [], 00:30:00.889 "listen_addresses": [], 00:30:00.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:00.889 "subtype": "Discovery" 00:30:00.889 }, 00:30:00.889 { 00:30:00.889 "allow_any_host": true, 00:30:00.889 "hosts": [], 00:30:00.889 "listen_addresses": [ 00:30:00.889 { 00:30:00.889 "adrfam": "IPv4", 00:30:00.889 "traddr": "10.0.0.3", 00:30:00.889 "trsvcid": "4420", 00:30:00.889 "trtype": "TCP" 00:30:00.889 } 00:30:00.889 ], 00:30:00.889 "max_cntlid": 65519, 00:30:00.889 "max_namespaces": 1, 00:30:00.889 "min_cntlid": 1, 00:30:00.889 "model_number": "SPDK bdev Controller", 00:30:00.889 "namespaces": [ 00:30:00.889 { 00:30:00.889 "bdev_name": "Nvme0n1", 00:30:00.889 "name": "Nvme0n1", 00:30:00.889 "nguid": "7F7E458FDD3840A0A0B2B6B0031A8201", 00:30:00.889 "nsid": 1, 00:30:00.889 "uuid": "7f7e458f-dd38-40a0-a0b2-b6b0031a8201" 00:30:00.889 } 00:30:00.889 ], 00:30:00.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.889 "serial_number": "SPDK00000000000001", 00:30:00.889 "subtype": "NVMe" 00:30:00.889 } 00:30:00.889 ] 00:30:00.889 13:59:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.147 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.147 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:01.147 13:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:01.147 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:30:01.147 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.147 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:01.147 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:01.406 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:30:01.406 13:59:42 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:30:01.406 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:30:01.406 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.406 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.406 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.406 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.406 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:01.406 13:59:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:01.406 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.406 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.664 rmmod nvme_tcp 00:30:01.664 rmmod nvme_fabrics 00:30:01.664 rmmod nvme_keyring 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 108879 ']' 00:30:01.664 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 108879 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 108879 ']' 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 108879 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 108879 00:30:01.664 killing process with pid 108879 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 108879' 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 108879 00:30:01.664 13:59:42 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 108879 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:01.922 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:02.181 13:59:42 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:02.181 13:59:43 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:02.181 13:59:43 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:02.181 13:59:43 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.181 13:59:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.181 13:59:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.181 13:59:43 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:30:02.181 00:30:02.181 real 0m3.999s 00:30:02.181 user 0m8.603s 00:30:02.181 sys 0m1.332s 00:30:02.181 13:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:02.181 ************************************ 00:30:02.181 END TEST nvmf_identify_passthru 00:30:02.181 ************************************ 00:30:02.181 13:59:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.439 13:59:43 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:02.439 13:59:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:02.439 13:59:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:02.439 13:59:43 -- common/autotest_common.sh@10 -- # set +x 00:30:02.439 ************************************ 00:30:02.439 START TEST nvmf_dif 00:30:02.439 ************************************ 00:30:02.439 13:59:43 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:02.439 * Looking for test storage... 
00:30:02.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:02.439 13:59:43 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:02.439 13:59:43 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:30:02.439 13:59:43 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:02.698 13:59:43 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:02.698 13:59:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:02.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.699 --rc genhtml_branch_coverage=1 00:30:02.699 --rc genhtml_function_coverage=1 00:30:02.699 --rc genhtml_legend=1 00:30:02.699 --rc geninfo_all_blocks=1 00:30:02.699 --rc geninfo_unexecuted_blocks=1 00:30:02.699 00:30:02.699 ' 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:02.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.699 --rc genhtml_branch_coverage=1 00:30:02.699 --rc genhtml_function_coverage=1 00:30:02.699 --rc genhtml_legend=1 00:30:02.699 --rc geninfo_all_blocks=1 00:30:02.699 --rc geninfo_unexecuted_blocks=1 00:30:02.699 00:30:02.699 ' 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:30:02.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.699 --rc genhtml_branch_coverage=1 00:30:02.699 --rc genhtml_function_coverage=1 00:30:02.699 --rc genhtml_legend=1 00:30:02.699 --rc geninfo_all_blocks=1 00:30:02.699 --rc geninfo_unexecuted_blocks=1 00:30:02.699 00:30:02.699 ' 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:02.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.699 --rc genhtml_branch_coverage=1 00:30:02.699 --rc genhtml_function_coverage=1 00:30:02.699 --rc genhtml_legend=1 00:30:02.699 --rc geninfo_all_blocks=1 00:30:02.699 --rc geninfo_unexecuted_blocks=1 00:30:02.699 00:30:02.699 ' 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.699 13:59:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.699 13:59:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.699 13:59:43 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.699 13:59:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.699 13:59:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:02.699 13:59:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:02.699 13:59:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.699 13:59:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:02.699 13:59:43 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:02.699 Cannot find device "nvmf_init_br" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@162 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:02.699 Cannot find device "nvmf_init_br2" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@163 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:02.699 Cannot find device "nvmf_tgt_br" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@164 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:02.699 Cannot find device "nvmf_tgt_br2" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@165 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:02.699 Cannot find device "nvmf_init_br" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@166 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:02.699 Cannot find device "nvmf_init_br2" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@167 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:02.699 Cannot find device "nvmf_tgt_br" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@168 -- # true 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:02.699 Cannot find device "nvmf_tgt_br2" 00:30:02.699 13:59:43 nvmf_dif -- nvmf/common.sh@169 -- # true 00:30:02.700 13:59:43 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:02.700 Cannot find device "nvmf_br" 00:30:02.700 13:59:43 nvmf_dif -- nvmf/common.sh@170 -- # true 00:30:02.700 13:59:43 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:30:02.958 Cannot find device "nvmf_init_if" 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@171 -- # true 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:02.958 Cannot find device "nvmf_init_if2" 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@172 -- # true 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:02.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@173 -- # true 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:02.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@174 -- # true 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:02.958 13:59:43 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:02.959 13:59:43 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:03.216 13:59:43 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:03.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:03.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.141 ms 00:30:03.216 00:30:03.216 --- 10.0.0.3 ping statistics --- 00:30:03.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.216 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:03.216 13:59:43 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:03.216 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:03.216 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:30:03.216 00:30:03.217 --- 10.0.0.4 ping statistics --- 00:30:03.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.217 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:30:03.217 13:59:43 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:03.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:30:03.217 00:30:03.217 --- 10.0.0.1 ping statistics --- 00:30:03.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.217 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:30:03.217 13:59:44 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:03.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:03.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:30:03.217 00:30:03.217 --- 10.0.0.2 ping statistics --- 00:30:03.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.217 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:03.217 13:59:44 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.217 13:59:44 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:30:03.217 13:59:44 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:03.217 13:59:44 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:03.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:03.782 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:03.782 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.782 13:59:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:03.782 13:59:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=109282 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:03.782 13:59:44 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 109282 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 109282 ']' 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:03.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:03.782 13:59:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:03.782 [2024-11-06 13:59:44.698897] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:30:03.782 [2024-11-06 13:59:44.698997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.040 [2024-11-06 13:59:44.855602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.040 [2024-11-06 13:59:44.935957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:04.040 [2024-11-06 13:59:44.936024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.040 [2024-11-06 13:59:44.936035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.040 [2024-11-06 13:59:44.936043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.040 [2024-11-06 13:59:44.936050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.040 [2024-11-06 13:59:44.936457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:30:04.975 13:59:45 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 13:59:45 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.975 13:59:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:04.975 13:59:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 [2024-11-06 13:59:45.683755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.975 13:59:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 ************************************ 00:30:04.975 START TEST fio_dif_1_default 00:30:04.975 ************************************ 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 bdev_null0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.975 13:59:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 [2024-11-06 13:59:45.747837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.975 { 00:30:04.975 "params": { 00:30:04.975 "name": "Nvme$subsystem", 00:30:04.975 "trtype": "$TEST_TRANSPORT", 00:30:04.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.975 "adrfam": "ipv4", 00:30:04.975 "trsvcid": "$NVMF_PORT", 00:30:04.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.975 "hdgst": ${hdgst:-false}, 00:30:04.975 "ddgst": ${ddgst:-false} 00:30:04.975 }, 00:30:04.975 "method": "bdev_nvme_attach_controller" 00:30:04.975 } 00:30:04.975 EOF 00:30:04.975 )") 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.975 13:59:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:04.975 "params": { 00:30:04.975 "name": "Nvme0", 00:30:04.975 "trtype": "tcp", 00:30:04.975 "traddr": "10.0.0.3", 00:30:04.975 "adrfam": "ipv4", 00:30:04.975 "trsvcid": "4420", 00:30:04.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.975 "hdgst": false, 00:30:04.975 "ddgst": false 00:30:04.975 }, 00:30:04.975 "method": "bdev_nvme_attach_controller" 00:30:04.975 }' 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:04.975 13:59:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.244 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:05.244 fio-3.35 00:30:05.244 Starting 1 thread 00:30:17.494 00:30:17.494 filename0: (groupid=0, jobs=1): err= 0: pid=109372: Wed Nov 6 13:59:56 2024 00:30:17.494 read: IOPS=650, BW=2600KiB/s (2663kB/s)(25.4MiB/10017msec) 00:30:17.494 slat (nsec): min=4118, max=63354, avg=6373.58, stdev=2462.38 00:30:17.494 clat (usec): min=331, max=41642, avg=6134.29, stdev=14145.97 00:30:17.494 lat (usec): min=337, max=41648, avg=6140.67, stdev=14145.91 00:30:17.494 clat percentiles (usec): 00:30:17.494 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:30:17.494 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 
371], 00:30:17.494 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[40633], 95.00th=[40633], 00:30:17.494 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:30:17.494 | 99.99th=[41681] 00:30:17.494 bw ( KiB/s): min= 960, max= 7776, per=100.00%, avg=2603.05, stdev=2037.87, samples=20 00:30:17.494 iops : min= 240, max= 1944, avg=650.75, stdev=509.48, samples=20 00:30:17.494 lat (usec) : 500=85.47%, 750=0.21% 00:30:17.494 lat (msec) : 4=0.06%, 50=14.25% 00:30:17.494 cpu : usr=82.98%, sys=16.41%, ctx=31, majf=0, minf=9 00:30:17.494 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.494 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.494 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:17.494 00:30:17.494 Run status group 0 (all jobs): 00:30:17.494 READ: bw=2600KiB/s (2663kB/s), 2600KiB/s-2600KiB/s (2663kB/s-2663kB/s), io=25.4MiB (26.7MB), run=10017-10017msec 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.494 00:30:17.494 real 0m11.255s 00:30:17.494 user 0m9.137s 00:30:17.494 sys 0m2.046s 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:17.494 13:59:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:17.494 ************************************ 00:30:17.494 END TEST fio_dif_1_default 00:30:17.494 ************************************ 00:30:17.494 13:59:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:17.494 13:59:57 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:17.494 13:59:57 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:17.494 13:59:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:17.494 ************************************ 00:30:17.494 START TEST fio_dif_1_multi_subsystems 00:30:17.494 ************************************ 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 bdev_null0 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 [2024-11-06 13:59:57.080189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 bdev_null1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.495 { 00:30:17.495 "params": { 00:30:17.495 "name": "Nvme$subsystem", 00:30:17.495 "trtype": "$TEST_TRANSPORT", 00:30:17.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.495 "adrfam": "ipv4", 00:30:17.495 "trsvcid": "$NVMF_PORT", 00:30:17.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.495 "hdgst": ${hdgst:-false}, 00:30:17.495 "ddgst": ${ddgst:-false} 00:30:17.495 }, 00:30:17.495 "method": "bdev_nvme_attach_controller" 00:30:17.495 } 00:30:17.495 EOF 00:30:17.495 )") 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # local sanitizers 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.495 { 00:30:17.495 "params": { 00:30:17.495 "name": "Nvme$subsystem", 00:30:17.495 "trtype": "$TEST_TRANSPORT", 00:30:17.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.495 "adrfam": "ipv4", 00:30:17.495 "trsvcid": "$NVMF_PORT", 00:30:17.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.495 "hdgst": ${hdgst:-false}, 00:30:17.495 "ddgst": ${ddgst:-false} 00:30:17.495 }, 00:30:17.495 "method": "bdev_nvme_attach_controller" 00:30:17.495 } 00:30:17.495 EOF 00:30:17.495 )") 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
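For orientation, the two-subsystem setup the harness drove through rpc_cmd above boils down to the following bash sketch. It is a reconstruction, not the literal dif.sh code, and it assumes SPDK's scripts/rpc.py talking to the already-running target over its default RPC socket:

# One 64 MiB null bdev with 512 B blocks, 16 B metadata and DIF type 1 per
# subsystem, each exported over NVMe/TCP on 10.0.0.3:4420.
for i in 0 1; do
  scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.3 -s 4420
done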
00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.495 "params": { 00:30:17.495 "name": "Nvme0", 00:30:17.495 "trtype": "tcp", 00:30:17.495 "traddr": "10.0.0.3", 00:30:17.495 "adrfam": "ipv4", 00:30:17.495 "trsvcid": "4420", 00:30:17.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.495 "hdgst": false, 00:30:17.495 "ddgst": false 00:30:17.495 }, 00:30:17.495 "method": "bdev_nvme_attach_controller" 00:30:17.495 },{ 00:30:17.495 "params": { 00:30:17.495 "name": "Nvme1", 00:30:17.495 "trtype": "tcp", 00:30:17.495 "traddr": "10.0.0.3", 00:30:17.495 "adrfam": "ipv4", 00:30:17.495 "trsvcid": "4420", 00:30:17.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.495 "hdgst": false, 00:30:17.495 "ddgst": false 00:30:17.495 }, 00:30:17.495 "method": "bdev_nvme_attach_controller" 00:30:17.495 }' 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:17.495 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:17.496 13:59:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.496 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:17.496 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:17.496 fio-3.35 00:30:17.496 Starting 2 threads 00:30:27.485 00:30:27.485 filename0: (groupid=0, jobs=1): err= 0: pid=109532: Wed Nov 6 14:00:08 2024 00:30:27.485 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10017msec) 00:30:27.485 slat (nsec): min=4150, max=66676, avg=10046.05, stdev=8529.79 00:30:27.485 clat (usec): min=330, max=42437, avg=21646.02, stdev=20219.78 00:30:27.486 lat (usec): min=336, max=42443, avg=21656.07, stdev=20219.84 00:30:27.486 clat percentiles (usec): 00:30:27.486 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 383], 00:30:27.486 | 30.00th=[ 408], 40.00th=[ 457], 50.00th=[40633], 60.00th=[40633], 00:30:27.486 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:27.486 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:30:27.486 | 99.99th=[42206] 00:30:27.486 bw ( KiB/s): min= 512, max= 990, per=49.56%, avg=737.50, stdev=144.23, samples=20 00:30:27.486 iops : 
min= 128, max= 247, avg=184.35, stdev=36.01, samples=20 00:30:27.486 lat (usec) : 500=40.75%, 750=6.66% 00:30:27.486 lat (msec) : 10=0.22%, 50=52.38% 00:30:27.486 cpu : usr=91.16%, sys=8.04%, ctx=124, majf=0, minf=0 00:30:27.486 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.486 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.486 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:27.486 filename1: (groupid=0, jobs=1): err= 0: pid=109533: Wed Nov 6 14:00:08 2024 00:30:27.486 read: IOPS=187, BW=750KiB/s (768kB/s)(7504KiB/10005msec) 00:30:27.486 slat (nsec): min=5821, max=61406, avg=9768.33, stdev=7780.40 00:30:27.486 clat (usec): min=336, max=41380, avg=21299.10, stdev=20204.25 00:30:27.486 lat (usec): min=342, max=41392, avg=21308.87, stdev=20203.73 00:30:27.486 clat percentiles (usec): 00:30:27.486 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:30:27.486 | 30.00th=[ 400], 40.00th=[ 453], 50.00th=[40633], 60.00th=[40633], 00:30:27.486 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:27.486 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:27.486 | 99.99th=[41157] 00:30:27.486 bw ( KiB/s): min= 576, max= 960, per=50.70%, avg=754.58, stdev=107.31, samples=19 00:30:27.486 iops : min= 144, max= 240, avg=188.63, stdev=26.84, samples=19 00:30:27.486 lat (usec) : 500=40.62%, 750=7.57% 00:30:27.486 lat (msec) : 4=0.21%, 50=51.60% 00:30:27.486 cpu : usr=91.88%, sys=7.63%, ctx=14, majf=0, minf=0 00:30:27.486 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.486 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.486 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:27.486 00:30:27.486 Run status group 0 (all jobs): 00:30:27.486 READ: bw=1487KiB/s (1523kB/s), 738KiB/s-750KiB/s (756kB/s-768kB/s), io=14.5MiB (15.3MB), run=10005-10017msec 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.486 14:00:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.486 00:30:27.486 real 0m11.299s 00:30:27.486 user 0m19.217s 00:30:27.486 sys 0m1.955s 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.486 ************************************ 00:30:27.486 14:00:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 END TEST fio_dif_1_multi_subsystems 00:30:27.486 ************************************ 00:30:27.486 14:00:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:27.486 14:00:08 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:27.486 14:00:08 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.486 14:00:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.486 ************************************ 00:30:27.486 START TEST fio_dif_rand_params 00:30:27.486 ************************************ 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.486 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.746 bdev_null0 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.746 [2024-11-06 14:00:08.459496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:27.746 { 00:30:27.746 "params": { 00:30:27.746 "name": "Nvme$subsystem", 00:30:27.746 "trtype": "$TEST_TRANSPORT", 00:30:27.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.746 "adrfam": "ipv4", 00:30:27.746 "trsvcid": "$NVMF_PORT", 00:30:27.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.746 "hdgst": ${hdgst:-false}, 00:30:27.746 "ddgst": ${ddgst:-false} 00:30:27.746 }, 00:30:27.746 "method": "bdev_nvme_attach_controller" 00:30:27.746 } 00:30:27.746 EOF 00:30:27.746 )") 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
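The sanitizer probing traced above (ldd piped through grep libasan and awk '{print $3}') exists so that, when SPDK was built with ASAN, the ASAN runtime lands ahead of the fio bdev plugin in LD_PRELOAD; here both lookups come back empty, so only the plugin is preloaded. A minimal sketch of the same logic, assuming the plugin path shown in the log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# An ASAN runtime linked into the plugin must be preloaded first; when none is
# linked, asan_lib stays empty and the expansion below preloads only the plugin.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# /dev/fd/62 carries the generated bdev_nvme JSON config and /dev/fd/61 the fio
# job file; the harness supplies both as inherited file descriptors.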
00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:27.746 "params": { 00:30:27.746 "name": "Nvme0", 00:30:27.746 "trtype": "tcp", 00:30:27.746 "traddr": "10.0.0.3", 00:30:27.746 "adrfam": "ipv4", 00:30:27.746 "trsvcid": "4420", 00:30:27.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.746 "hdgst": false, 00:30:27.746 "ddgst": false 00:30:27.746 }, 00:30:27.746 "method": "bdev_nvme_attach_controller" 00:30:27.746 }' 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:27.746 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:27.747 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:27.747 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:27.747 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:27.747 14:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.006 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:28.006 ... 
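The filename0 job description just printed was generated by gen_fio_conf from the parameters set at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). A hypothetical standalone job file with the same externally visible shape, written from bash and assuming the plugin's Nvme0n1 bdev-naming convention for the attached controller's first namespace:

cat > rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1        ; the SPDK fio plugin runs only in thread mode

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
EOF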
00:30:28.006 fio-3.35 00:30:28.006 Starting 3 threads 00:30:34.582 00:30:34.582 filename0: (groupid=0, jobs=1): err= 0: pid=109689: Wed Nov 6 14:00:14 2024 00:30:34.582 read: IOPS=327, BW=40.9MiB/s (42.9MB/s)(205MiB/5004msec) 00:30:34.582 slat (nsec): min=5847, max=32460, avg=10195.95, stdev=2562.99 00:30:34.582 clat (usec): min=4358, max=52046, avg=9155.97, stdev=3144.48 00:30:34.582 lat (usec): min=4364, max=52060, avg=9166.16, stdev=3144.48 00:30:34.582 clat percentiles (usec): 00:30:34.582 | 1.00th=[ 6128], 5.00th=[ 7963], 10.00th=[ 8291], 20.00th=[ 8586], 00:30:34.582 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:30:34.582 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9765], 00:30:34.582 | 99.00th=[10421], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:30:34.582 | 99.99th=[52167] 00:30:34.582 bw ( KiB/s): min=38912, max=44544, per=37.54%, avg=42183.11, stdev=1711.46, samples=9 00:30:34.582 iops : min= 304, max= 348, avg=329.56, stdev=13.37, samples=9 00:30:34.582 lat (msec) : 10=97.31%, 20=2.14%, 50=0.18%, 100=0.37% 00:30:34.582 cpu : usr=89.55%, sys=8.97%, ctx=8, majf=0, minf=9 00:30:34.582 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 issued rwts: total=1637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:34.582 filename0: (groupid=0, jobs=1): err= 0: pid=109690: Wed Nov 6 14:00:14 2024 00:30:34.582 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(196MiB/5004msec) 00:30:34.582 slat (nsec): min=5856, max=54815, avg=10216.03, stdev=3098.21 00:30:34.582 clat (usec): min=4401, max=50278, avg=9577.47, stdev=2023.74 00:30:34.582 lat (usec): min=4408, max=50294, avg=9587.68, stdev=2023.78 00:30:34.582 clat percentiles (usec): 00:30:34.582 | 1.00th=[ 5473], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[ 8848], 00:30:34.582 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:30:34.582 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:30:34.582 | 99.00th=[11338], 99.50th=[11469], 99.90th=[49021], 99.95th=[50070], 00:30:34.582 | 99.99th=[50070] 00:30:34.582 bw ( KiB/s): min=36608, max=41984, per=35.46%, avg=39850.67, stdev=1481.71, samples=9 00:30:34.582 iops : min= 286, max= 328, avg=311.33, stdev=11.58, samples=9 00:30:34.582 lat (msec) : 10=69.07%, 20=30.73%, 50=0.13%, 100=0.06% 00:30:34.582 cpu : usr=88.67%, sys=9.87%, ctx=30, majf=0, minf=0 00:30:34.582 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 issued rwts: total=1565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:34.582 filename0: (groupid=0, jobs=1): err= 0: pid=109691: Wed Nov 6 14:00:14 2024 00:30:34.582 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5004msec) 00:30:34.582 slat (nsec): min=5875, max=23123, avg=8530.45, stdev=3317.32 00:30:34.582 clat (usec): min=5715, max=14667, avg=12583.62, stdev=1247.60 00:30:34.582 lat (usec): min=5721, max=14680, avg=12592.15, stdev=1247.57 00:30:34.582 clat percentiles (usec): 00:30:34.582 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[11994], 20.00th=[12256], 
00:30:34.582 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:30:34.582 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13698], 00:30:34.582 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14615], 00:30:34.582 | 99.99th=[14615] 00:30:34.582 bw ( KiB/s): min=29184, max=32256, per=26.96%, avg=30293.33, stdev=1024.00, samples=9 00:30:34.582 iops : min= 228, max= 252, avg=236.67, stdev= 8.00, samples=9 00:30:34.582 lat (msec) : 10=5.04%, 20=94.96% 00:30:34.582 cpu : usr=90.15%, sys=8.57%, ctx=4, majf=0, minf=0 00:30:34.582 IO depths : 1=30.6%, 2=69.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.582 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:34.582 00:30:34.582 Run status group 0 (all jobs): 00:30:34.582 READ: bw=110MiB/s (115MB/s), 29.8MiB/s-40.9MiB/s (31.2MB/s-42.9MB/s), io=549MiB (576MB), run=5004-5004msec 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 bdev_null0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.582 [2024-11-06 14:00:14.734115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:34.582 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 bdev_null1 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
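Each bdev_null_create above asks for a 64 MiB bdev with 512-byte data blocks plus 16 bytes of separate metadata carrying DIF type 2 protection information, which pairs with the --dif-insert-or-strip flag passed to nvmf_create_transport earlier so the target can insert and strip protection data on behalf of the host. One way to sanity-check such a bdev after creation (the exact output field names are an assumption, not taken from this log):

# Dump the protection-related properties of the null bdev just created.
scripts/rpc.py bdev_get_bdevs -b bdev_null0 \
    | jq '.[0] | {block_size, md_size, dif_type}'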
00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 bdev_null2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.583 { 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme$subsystem", 00:30:34.583 "trtype": 
"$TEST_TRANSPORT", 00:30:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "$NVMF_PORT", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.583 "hdgst": ${hdgst:-false}, 00:30:34.583 "ddgst": ${ddgst:-false} 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 } 00:30:34.583 EOF 00:30:34.583 )") 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.583 { 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme$subsystem", 00:30:34.583 "trtype": "$TEST_TRANSPORT", 00:30:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "$NVMF_PORT", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.583 "hdgst": ${hdgst:-false}, 00:30:34.583 "ddgst": ${ddgst:-false} 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 } 00:30:34.583 EOF 00:30:34.583 )") 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.583 { 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme$subsystem", 00:30:34.583 "trtype": "$TEST_TRANSPORT", 00:30:34.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "$NVMF_PORT", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.583 "hdgst": ${hdgst:-false}, 00:30:34.583 "ddgst": ${ddgst:-false} 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 } 00:30:34.583 EOF 00:30:34.583 )") 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme0", 00:30:34.583 "trtype": "tcp", 00:30:34.583 "traddr": "10.0.0.3", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "4420", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.583 "hdgst": false, 00:30:34.583 "ddgst": false 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 },{ 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme1", 00:30:34.583 "trtype": "tcp", 00:30:34.583 "traddr": "10.0.0.3", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "4420", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.583 "hdgst": false, 00:30:34.583 "ddgst": false 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 },{ 00:30:34.583 "params": { 00:30:34.583 "name": "Nvme2", 00:30:34.583 "trtype": "tcp", 00:30:34.583 "traddr": "10.0.0.3", 00:30:34.583 "adrfam": "ipv4", 00:30:34.583 "trsvcid": "4420", 00:30:34.583 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:34.583 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:34.583 "hdgst": false, 00:30:34.583 "ddgst": false 00:30:34.583 }, 00:30:34.583 "method": "bdev_nvme_attach_controller" 00:30:34.583 }' 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:34.583 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:34.584 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:34.584 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:34.584 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:34.584 14:00:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:34.584 14:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.584 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:34.584 ... 00:30:34.584 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:34.584 ... 00:30:34.584 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:34.584 ... 00:30:34.584 fio-3.35 00:30:34.584 Starting 24 threads 00:30:46.811 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109786: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=286, BW=1145KiB/s (1173kB/s)(11.2MiB/10024msec) 00:30:46.811 slat (usec): min=3, max=8019, avg=16.67, stdev=198.63 00:30:46.811 clat (msec): min=22, max=123, avg=55.72, stdev=19.07 00:30:46.811 lat (msec): min=22, max=123, avg=55.74, stdev=19.07 00:30:46.811 clat percentiles (msec): 00:30:46.811 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 39], 00:30:46.811 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 59], 00:30:46.811 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 90], 00:30:46.811 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:30:46.811 | 99.99th=[ 125] 00:30:46.811 bw ( KiB/s): min= 640, max= 1504, per=3.95%, avg=1144.45, stdev=244.85, samples=20 00:30:46.811 iops : min= 160, max= 376, avg=286.05, stdev=61.22, samples=20 00:30:46.811 lat (msec) : 50=43.03%, 100=54.01%, 250=2.96% 00:30:46.811 cpu : usr=36.78%, sys=1.45%, ctx=1137, majf=0, minf=9 00:30:46.811 IO depths : 1=1.2%, 2=2.5%, 4=9.5%, 8=73.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:46.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 complete : 0=0.0%, 4=90.0%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 issued rwts: total=2870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109787: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=295, BW=1181KiB/s (1209kB/s)(11.5MiB/10016msec) 00:30:46.811 slat (usec): min=3, max=8018, avg=12.40, stdev=148.58 00:30:46.811 clat (msec): min=22, max=136, avg=54.10, stdev=18.00 00:30:46.811 lat (msec): min=22, max=136, avg=54.11, stdev=18.00 00:30:46.811 clat percentiles (msec): 00:30:46.811 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 37], 00:30:46.811 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:30:46.811 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 75], 95.00th=[ 86], 00:30:46.811 | 99.00th=[ 110], 99.50th=[ 118], 99.90th=[ 138], 99.95th=[ 138], 00:30:46.811 | 99.99th=[ 138] 00:30:46.811 bw ( KiB/s): min= 696, max= 1632, per=4.07%, avg=1179.80, stdev=233.23, samples=20 00:30:46.811 iops : min= 174, max= 408, avg=294.95, stdev=58.31, samples=20 00:30:46.811 lat (msec) : 50=47.09%, 100=50.71%, 250=2.20% 00:30:46.811 cpu : usr=37.17%, sys=1.38%, ctx=1148, majf=0, minf=9 00:30:46.811 IO depths : 1=0.9%, 2=2.0%, 4=8.4%, 8=75.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:46.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 issued rwts: 
total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109788: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=310, BW=1243KiB/s (1273kB/s)(12.2MiB/10023msec) 00:30:46.811 slat (usec): min=3, max=4019, avg=14.74, stdev=126.75 00:30:46.811 clat (msec): min=7, max=113, avg=51.34, stdev=18.28 00:30:46.811 lat (msec): min=7, max=113, avg=51.36, stdev=18.28 00:30:46.811 clat percentiles (msec): 00:30:46.811 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 32], 20.00th=[ 36], 00:30:46.811 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 55], 00:30:46.811 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 84], 00:30:46.811 | 99.00th=[ 103], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:30:46.811 | 99.99th=[ 114] 00:30:46.811 bw ( KiB/s): min= 768, max= 2416, per=4.25%, avg=1229.32, stdev=334.98, samples=19 00:30:46.811 iops : min= 192, max= 604, avg=307.32, stdev=83.75, samples=19 00:30:46.811 lat (msec) : 10=0.51%, 20=3.60%, 50=46.27%, 100=48.43%, 250=1.19% 00:30:46.811 cpu : usr=45.20%, sys=2.05%, ctx=1878, majf=0, minf=9 00:30:46.811 IO depths : 1=1.6%, 2=3.5%, 4=10.8%, 8=72.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:46.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 issued rwts: total=3114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109789: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=300, BW=1201KiB/s (1230kB/s)(11.8MiB/10024msec) 00:30:46.811 slat (usec): min=5, max=8022, avg=19.10, stdev=263.20 00:30:46.811 clat (msec): min=21, max=132, avg=53.14, stdev=18.36 00:30:46.811 lat (msec): min=21, max=132, avg=53.16, stdev=18.36 00:30:46.811 clat percentiles (msec): 00:30:46.811 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 36], 00:30:46.811 | 30.00th=[ 40], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 59], 00:30:46.811 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 77], 95.00th=[ 85], 00:30:46.811 | 99.00th=[ 107], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 133], 00:30:46.811 | 99.99th=[ 133] 00:30:46.811 bw ( KiB/s): min= 816, max= 1520, per=4.14%, avg=1198.85, stdev=217.56, samples=20 00:30:46.811 iops : min= 204, max= 380, avg=299.65, stdev=54.39, samples=20 00:30:46.811 lat (msec) : 50=50.76%, 100=47.87%, 250=1.36% 00:30:46.811 cpu : usr=31.93%, sys=1.25%, ctx=935, majf=0, minf=9 00:30:46.811 IO depths : 1=0.3%, 2=0.7%, 4=6.4%, 8=78.9%, 16=13.7%, 32=0.0%, >=64=0.0% 00:30:46.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 complete : 0=0.0%, 4=88.9%, 8=7.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 issued rwts: total=3010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109790: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=293, BW=1175KiB/s (1203kB/s)(11.5MiB/10030msec) 00:30:46.811 slat (usec): min=5, max=8019, avg=22.95, stdev=329.67 00:30:46.811 clat (msec): min=21, max=131, avg=54.29, stdev=17.81 00:30:46.811 lat (msec): min=21, max=131, avg=54.31, stdev=17.83 00:30:46.811 clat percentiles (msec): 00:30:46.811 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 37], 00:30:46.811 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 61], 00:30:46.811 
| 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 73], 95.00th=[ 85], 00:30:46.811 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 00:30:46.811 | 99.99th=[ 132] 00:30:46.811 bw ( KiB/s): min= 856, max= 1848, per=4.06%, avg=1176.00, stdev=226.51, samples=20 00:30:46.811 iops : min= 214, max= 462, avg=294.00, stdev=56.63, samples=20 00:30:46.811 lat (msec) : 50=46.42%, 100=52.29%, 250=1.29% 00:30:46.811 cpu : usr=31.86%, sys=1.34%, ctx=999, majf=0, minf=9 00:30:46.811 IO depths : 1=1.0%, 2=2.1%, 4=10.2%, 8=74.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:46.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.811 issued rwts: total=2947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.811 filename0: (groupid=0, jobs=1): err= 0: pid=109791: Wed Nov 6 14:00:26 2024 00:30:46.811 read: IOPS=325, BW=1303KiB/s (1335kB/s)(12.8MiB/10030msec) 00:30:46.812 slat (usec): min=4, max=4015, avg=10.57, stdev=70.28 00:30:46.812 clat (msec): min=12, max=109, avg=49.01, stdev=16.50 00:30:46.812 lat (msec): min=12, max=109, avg=49.02, stdev=16.51 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 35], 00:30:46.812 | 30.00th=[ 38], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 52], 00:30:46.812 | 70.00th=[ 57], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 82], 00:30:46.812 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 110], 99.95th=[ 110], 00:30:46.812 | 99.99th=[ 110] 00:30:46.812 bw ( KiB/s): min= 896, max= 2135, per=4.50%, avg=1302.35, stdev=269.08, samples=20 00:30:46.812 iops : min= 224, max= 533, avg=325.55, stdev=67.15, samples=20 00:30:46.812 lat (msec) : 20=1.47%, 50=56.03%, 100=42.04%, 250=0.46% 00:30:46.812 cpu : usr=40.35%, sys=1.72%, ctx=1198, majf=0, minf=9 00:30:46.812 IO depths : 1=0.6%, 2=1.4%, 4=8.4%, 8=76.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename0: (groupid=0, jobs=1): err= 0: pid=109792: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=300, BW=1201KiB/s (1230kB/s)(11.8MiB/10034msec) 00:30:46.812 slat (usec): min=3, max=8043, avg=17.96, stdev=219.24 00:30:46.812 clat (msec): min=21, max=137, avg=53.15, stdev=18.28 00:30:46.812 lat (msec): min=21, max=137, avg=53.17, stdev=18.28 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 38], 00:30:46.812 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:30:46.812 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 86], 00:30:46.812 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 118], 99.95th=[ 118], 00:30:46.812 | 99.99th=[ 138] 00:30:46.812 bw ( KiB/s): min= 728, max= 1616, per=4.14%, avg=1199.40, stdev=246.49, samples=20 00:30:46.812 iops : min= 182, max= 404, avg=299.85, stdev=61.62, samples=20 00:30:46.812 lat (msec) : 50=46.56%, 100=50.78%, 250=2.66% 00:30:46.812 cpu : usr=42.72%, sys=1.54%, ctx=1209, majf=0, minf=9 00:30:46.812 IO depths : 1=1.4%, 2=3.3%, 4=11.8%, 8=71.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=90.5%, 
8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename0: (groupid=0, jobs=1): err= 0: pid=109793: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=323, BW=1295KiB/s (1326kB/s)(12.7MiB/10025msec) 00:30:46.812 slat (usec): min=4, max=8022, avg=17.14, stdev=222.39 00:30:46.812 clat (msec): min=11, max=119, avg=49.29, stdev=16.74 00:30:46.812 lat (msec): min=11, max=119, avg=49.31, stdev=16.74 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 14], 5.00th=[ 25], 10.00th=[ 32], 20.00th=[ 35], 00:30:46.812 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 52], 00:30:46.812 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 81], 00:30:46.812 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 121], 00:30:46.812 | 99.99th=[ 121] 00:30:46.812 bw ( KiB/s): min= 896, max= 1960, per=4.47%, avg=1293.60, stdev=245.59, samples=20 00:30:46.812 iops : min= 224, max= 490, avg=323.40, stdev=61.40, samples=20 00:30:46.812 lat (msec) : 20=2.03%, 50=54.21%, 100=43.11%, 250=0.65% 00:30:46.812 cpu : usr=38.98%, sys=1.75%, ctx=1122, majf=0, minf=9 00:30:46.812 IO depths : 1=0.6%, 2=1.4%, 4=8.2%, 8=76.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename1: (groupid=0, jobs=1): err= 0: pid=109794: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=320, BW=1282KiB/s (1313kB/s)(12.6MiB/10041msec) 00:30:46.812 slat (usec): min=5, max=8040, avg=18.73, stdev=200.43 00:30:46.812 clat (msec): min=13, max=103, avg=49.73, stdev=16.09 00:30:46.812 lat (msec): min=13, max=103, avg=49.75, stdev=16.09 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 36], 00:30:46.812 | 30.00th=[ 39], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 55], 00:30:46.812 | 70.00th=[ 57], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 80], 00:30:46.812 | 99.00th=[ 90], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 105], 00:30:46.812 | 99.99th=[ 105] 00:30:46.812 bw ( KiB/s): min= 768, max= 2087, per=4.43%, avg=1282.35, stdev=273.99, samples=20 00:30:46.812 iops : min= 192, max= 521, avg=320.55, stdev=68.38, samples=20 00:30:46.812 lat (msec) : 20=0.99%, 50=51.88%, 100=47.00%, 250=0.12% 00:30:46.812 cpu : usr=42.57%, sys=1.26%, ctx=1215, majf=0, minf=9 00:30:46.812 IO depths : 1=1.6%, 2=3.6%, 4=12.0%, 8=71.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename1: (groupid=0, jobs=1): err= 0: pid=109795: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=313, BW=1255KiB/s (1285kB/s)(12.3MiB/10013msec) 00:30:46.812 slat (usec): min=3, max=4029, avg=13.77, stdev=113.43 00:30:46.812 clat (msec): min=16, max=118, avg=50.91, stdev=16.75 00:30:46.812 lat (msec): min=16, max=119, avg=50.92, stdev=16.75 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 36], 00:30:46.812 | 
30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 49], 60.00th=[ 54], 00:30:46.812 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:30:46.812 | 99.00th=[ 95], 99.50th=[ 107], 99.90th=[ 120], 99.95th=[ 120], 00:30:46.812 | 99.99th=[ 120] 00:30:46.812 bw ( KiB/s): min= 768, max= 1728, per=4.35%, avg=1258.53, stdev=236.15, samples=19 00:30:46.812 iops : min= 192, max= 432, avg=314.58, stdev=59.05, samples=19 00:30:46.812 lat (msec) : 20=0.29%, 50=53.04%, 100=45.88%, 250=0.80% 00:30:46.812 cpu : usr=39.25%, sys=1.48%, ctx=1545, majf=0, minf=9 00:30:46.812 IO depths : 1=1.5%, 2=3.2%, 4=10.9%, 8=72.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename1: (groupid=0, jobs=1): err= 0: pid=109796: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=312, BW=1248KiB/s (1278kB/s)(12.2MiB/10041msec) 00:30:46.812 slat (usec): min=3, max=8019, avg=14.81, stdev=175.47 00:30:46.812 clat (msec): min=12, max=135, avg=51.13, stdev=18.50 00:30:46.812 lat (msec): min=12, max=135, avg=51.15, stdev=18.50 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 36], 00:30:46.812 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 54], 00:30:46.812 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 88], 00:30:46.812 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:30:46.812 | 99.99th=[ 136] 00:30:46.812 bw ( KiB/s): min= 640, max= 1708, per=4.31%, avg=1247.00, stdev=254.53, samples=20 00:30:46.812 iops : min= 160, max= 427, avg=311.75, stdev=63.63, samples=20 00:30:46.812 lat (msec) : 20=1.31%, 50=52.90%, 100=43.40%, 250=2.39% 00:30:46.812 cpu : usr=39.72%, sys=1.37%, ctx=1172, majf=0, minf=9 00:30:46.812 IO depths : 1=0.8%, 2=1.5%, 4=7.8%, 8=76.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=3134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename1: (groupid=0, jobs=1): err= 0: pid=109797: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=274, BW=1098KiB/s (1125kB/s)(10.7MiB/10001msec) 00:30:46.812 slat (usec): min=3, max=8018, avg=15.06, stdev=171.26 00:30:46.812 clat (usec): min=1815, max=118607, avg=58159.02, stdev=17816.61 00:30:46.812 lat (usec): min=1821, max=118613, avg=58174.08, stdev=17820.81 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 47], 00:30:46.812 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 61], 00:30:46.812 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 87], 00:30:46.812 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 120], 00:30:46.812 | 99.99th=[ 120] 00:30:46.812 bw ( KiB/s): min= 656, max= 1580, per=3.76%, avg=1087.79, stdev=203.39, samples=19 00:30:46.812 iops : min= 164, max= 395, avg=271.95, stdev=50.85, samples=19 00:30:46.812 lat (msec) : 2=0.07%, 10=0.58%, 20=0.51%, 50=31.25%, 100=66.50% 00:30:46.812 lat (msec) : 250=1.09% 00:30:46.812 cpu : usr=36.10%, sys=1.30%, ctx=1044, majf=0, minf=9 00:30:46.812 IO depths : 1=2.0%, 2=4.9%, 4=15.6%, 
8=66.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:30:46.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.812 issued rwts: total=2746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.812 filename1: (groupid=0, jobs=1): err= 0: pid=109798: Wed Nov 6 14:00:26 2024 00:30:46.812 read: IOPS=335, BW=1342KiB/s (1374kB/s)(13.2MiB/10043msec) 00:30:46.812 slat (usec): min=4, max=8018, avg=21.29, stdev=308.34 00:30:46.812 clat (usec): min=1283, max=120961, avg=47499.56, stdev=18545.52 00:30:46.812 lat (usec): min=1289, max=120984, avg=47520.85, stdev=18552.07 00:30:46.812 clat percentiles (msec): 00:30:46.812 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 34], 00:30:46.812 | 30.00th=[ 37], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 52], 00:30:46.812 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 77], 00:30:46.812 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 122], 00:30:46.812 | 99.99th=[ 122] 00:30:46.812 bw ( KiB/s): min= 816, max= 3187, per=4.63%, avg=1340.55, stdev=472.52, samples=20 00:30:46.812 iops : min= 204, max= 796, avg=335.10, stdev=117.97, samples=20 00:30:46.812 lat (msec) : 2=0.47%, 4=0.47%, 10=0.47%, 20=7.30%, 50=49.87% 00:30:46.812 lat (msec) : 100=40.52%, 250=0.89% 00:30:46.813 cpu : usr=39.42%, sys=1.57%, ctx=1094, majf=0, minf=9 00:30:46.813 IO depths : 1=1.3%, 2=2.7%, 4=10.0%, 8=73.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=3369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename1: (groupid=0, jobs=1): err= 0: pid=109799: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=342, BW=1369KiB/s (1402kB/s)(13.4MiB/10029msec) 00:30:46.813 slat (usec): min=3, max=4021, avg=14.57, stdev=120.04 00:30:46.813 clat (usec): min=8791, max=95613, avg=46574.30, stdev=14917.96 00:30:46.813 lat (usec): min=8801, max=95620, avg=46588.87, stdev=14919.33 00:30:46.813 clat percentiles (usec): 00:30:46.813 | 1.00th=[18744], 5.00th=[23987], 10.00th=[30540], 20.00th=[33817], 00:30:46.813 | 30.00th=[36963], 40.00th=[40109], 50.00th=[46400], 60.00th=[49021], 00:30:46.813 | 70.00th=[54789], 80.00th=[59507], 90.00th=[65274], 95.00th=[72877], 00:30:46.813 | 99.00th=[86508], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:30:46.813 | 99.99th=[95945] 00:30:46.813 bw ( KiB/s): min= 952, max= 2064, per=4.73%, avg=1369.20, stdev=271.36, samples=20 00:30:46.813 iops : min= 238, max= 516, avg=342.30, stdev=67.84, samples=20 00:30:46.813 lat (msec) : 10=0.20%, 20=1.78%, 50=59.42%, 100=38.60% 00:30:46.813 cpu : usr=42.17%, sys=1.54%, ctx=1381, majf=0, minf=9 00:30:46.813 IO depths : 1=1.2%, 2=2.6%, 4=9.6%, 8=74.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=3433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename1: (groupid=0, jobs=1): err= 0: pid=109800: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.1MiB/10011msec) 00:30:46.813 slat (usec): min=3, 
max=8023, avg=12.89, stdev=150.80 00:30:46.813 clat (msec): min=15, max=108, avg=56.48, stdev=16.69 00:30:46.813 lat (msec): min=15, max=108, avg=56.50, stdev=16.69 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 46], 00:30:46.813 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:30:46.813 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 85], 00:30:46.813 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:30:46.813 | 99.99th=[ 109] 00:30:46.813 bw ( KiB/s): min= 736, max= 1816, per=3.87%, avg=1120.79, stdev=226.94, samples=19 00:30:46.813 iops : min= 184, max= 454, avg=280.16, stdev=56.76, samples=19 00:30:46.813 lat (msec) : 20=0.21%, 50=37.82%, 100=61.29%, 250=0.67% 00:30:46.813 cpu : usr=31.93%, sys=1.34%, ctx=962, majf=0, minf=9 00:30:46.813 IO depths : 1=1.3%, 2=3.1%, 4=11.1%, 8=72.0%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=2829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename1: (groupid=0, jobs=1): err= 0: pid=109801: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=271, BW=1086KiB/s (1112kB/s)(10.6MiB/10004msec) 00:30:46.813 slat (usec): min=3, max=8046, avg=13.52, stdev=172.36 00:30:46.813 clat (msec): min=5, max=125, avg=58.83, stdev=18.05 00:30:46.813 lat (msec): min=5, max=125, avg=58.85, stdev=18.05 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:30:46.813 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 61], 00:30:46.813 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 86], 00:30:46.813 | 99.00th=[ 112], 99.50th=[ 126], 99.90th=[ 126], 99.95th=[ 126], 00:30:46.813 | 99.99th=[ 126] 00:30:46.813 bw ( KiB/s): min= 640, max= 1536, per=3.72%, avg=1076.63, stdev=205.30, samples=19 00:30:46.813 iops : min= 160, max= 384, avg=269.16, stdev=51.33, samples=19 00:30:46.813 lat (msec) : 10=0.59%, 20=0.59%, 50=28.19%, 100=68.35%, 250=2.28% 00:30:46.813 cpu : usr=35.37%, sys=1.52%, ctx=987, majf=0, minf=10 00:30:46.813 IO depths : 1=2.7%, 2=5.9%, 4=15.8%, 8=65.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=91.6%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename2: (groupid=0, jobs=1): err= 0: pid=109802: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=276, BW=1107KiB/s (1133kB/s)(10.8MiB/10004msec) 00:30:46.813 slat (usec): min=3, max=4020, avg=12.55, stdev=100.72 00:30:46.813 clat (msec): min=4, max=137, avg=57.72, stdev=18.09 00:30:46.813 lat (msec): min=4, max=137, avg=57.74, stdev=18.09 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 47], 00:30:46.813 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:30:46.813 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 89], 00:30:46.813 | 99.00th=[ 112], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:30:46.813 | 99.99th=[ 138] 00:30:46.813 bw ( KiB/s): min= 640, max= 1498, per=3.78%, avg=1095.68, stdev=191.96, samples=19 00:30:46.813 iops : min= 160, max= 
374, avg=273.89, stdev=47.93, samples=19 00:30:46.813 lat (msec) : 10=0.58%, 20=0.22%, 50=36.96%, 100=60.30%, 250=1.95% 00:30:46.813 cpu : usr=38.35%, sys=1.65%, ctx=1134, majf=0, minf=9 00:30:46.813 IO depths : 1=2.3%, 2=5.1%, 4=14.6%, 8=66.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename2: (groupid=0, jobs=1): err= 0: pid=109803: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=277, BW=1111KiB/s (1137kB/s)(10.9MiB/10011msec) 00:30:46.813 slat (usec): min=3, max=8046, avg=30.27, stdev=390.44 00:30:46.813 clat (msec): min=12, max=123, avg=57.47, stdev=18.05 00:30:46.813 lat (msec): min=12, max=123, avg=57.50, stdev=18.06 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 46], 00:30:46.813 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:30:46.813 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 85], 00:30:46.813 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 124], 99.95th=[ 124], 00:30:46.813 | 99.99th=[ 124] 00:30:46.813 bw ( KiB/s): min= 768, max= 1587, per=3.80%, avg=1100.32, stdev=225.81, samples=19 00:30:46.813 iops : min= 192, max= 396, avg=275.00, stdev=56.38, samples=19 00:30:46.813 lat (msec) : 20=0.58%, 50=36.47%, 100=60.32%, 250=2.63% 00:30:46.813 cpu : usr=32.75%, sys=1.35%, ctx=919, majf=0, minf=9 00:30:46.813 IO depths : 1=1.9%, 2=4.2%, 4=13.2%, 8=69.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=2780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename2: (groupid=0, jobs=1): err= 0: pid=109804: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=300, BW=1201KiB/s (1230kB/s)(11.8MiB/10027msec) 00:30:46.813 slat (usec): min=3, max=9020, avg=15.49, stdev=211.36 00:30:46.813 clat (msec): min=24, max=107, avg=53.16, stdev=14.84 00:30:46.813 lat (msec): min=24, max=107, avg=53.17, stdev=14.84 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 37], 00:30:46.813 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 59], 00:30:46.813 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 79], 00:30:46.813 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 108], 99.95th=[ 108], 00:30:46.813 | 99.99th=[ 108] 00:30:46.813 bw ( KiB/s): min= 944, max= 1584, per=4.14%, avg=1197.45, stdev=169.28, samples=20 00:30:46.813 iops : min= 236, max= 396, avg=299.35, stdev=42.32, samples=20 00:30:46.813 lat (msec) : 50=49.60%, 100=50.27%, 250=0.13% 00:30:46.813 cpu : usr=33.68%, sys=1.42%, ctx=1197, majf=0, minf=9 00:30:46.813 IO depths : 1=0.7%, 2=1.5%, 4=7.7%, 8=77.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=3010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename2: (groupid=0, jobs=1): err= 0: pid=109805: Wed Nov 6 14:00:26 2024 
00:30:46.813 read: IOPS=288, BW=1153KiB/s (1180kB/s)(11.3MiB/10008msec) 00:30:46.813 slat (usec): min=3, max=2333, avg=10.13, stdev=43.58 00:30:46.813 clat (msec): min=7, max=119, avg=55.45, stdev=19.93 00:30:46.813 lat (msec): min=7, max=119, avg=55.46, stdev=19.93 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 32], 20.00th=[ 40], 00:30:46.813 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 58], 00:30:46.813 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 93], 00:30:46.813 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:30:46.813 | 99.99th=[ 121] 00:30:46.813 bw ( KiB/s): min= 640, max= 2152, per=3.96%, avg=1146.95, stdev=303.99, samples=19 00:30:46.813 iops : min= 160, max= 538, avg=286.74, stdev=76.00, samples=19 00:30:46.813 lat (msec) : 10=0.55%, 20=3.88%, 50=31.10%, 100=61.13%, 250=3.33% 00:30:46.813 cpu : usr=41.11%, sys=1.85%, ctx=1213, majf=0, minf=9 00:30:46.813 IO depths : 1=2.4%, 2=5.3%, 4=14.8%, 8=66.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:30:46.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.813 issued rwts: total=2884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.813 filename2: (groupid=0, jobs=1): err= 0: pid=109806: Wed Nov 6 14:00:26 2024 00:30:46.813 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.6MiB/10022msec) 00:30:46.813 slat (usec): min=4, max=9019, avg=13.75, stdev=204.15 00:30:46.813 clat (msec): min=8, max=119, avg=45.82, stdev=16.58 00:30:46.813 lat (msec): min=8, max=119, avg=45.83, stdev=16.59 00:30:46.813 clat percentiles (msec): 00:30:46.813 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 33], 00:30:46.813 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 45], 60.00th=[ 48], 00:30:46.814 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 73], 00:30:46.814 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 121], 00:30:46.814 | 99.99th=[ 121] 00:30:46.814 bw ( KiB/s): min= 912, max= 2136, per=4.81%, avg=1393.00, stdev=326.72, samples=20 00:30:46.814 iops : min= 228, max= 534, avg=348.25, stdev=81.68, samples=20 00:30:46.814 lat (msec) : 10=0.46%, 20=4.24%, 50=60.65%, 100=34.57%, 250=0.09% 00:30:46.814 cpu : usr=41.87%, sys=1.58%, ctx=1033, majf=0, minf=9 00:30:46.814 IO depths : 1=0.6%, 2=1.5%, 4=8.9%, 8=76.1%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:46.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 issued rwts: total=3489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.814 filename2: (groupid=0, jobs=1): err= 0: pid=109807: Wed Nov 6 14:00:26 2024 00:30:46.814 read: IOPS=274, BW=1099KiB/s (1125kB/s)(10.7MiB/10002msec) 00:30:46.814 slat (usec): min=3, max=8014, avg=13.04, stdev=152.83 00:30:46.814 clat (msec): min=19, max=119, avg=58.16, stdev=16.85 00:30:46.814 lat (msec): min=19, max=119, avg=58.17, stdev=16.85 00:30:46.814 clat percentiles (msec): 00:30:46.814 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 46], 00:30:46.814 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 60], 60.00th=[ 61], 00:30:46.814 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 85], 00:30:46.814 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 121], 00:30:46.814 | 99.99th=[ 121] 00:30:46.814 bw ( 
KiB/s): min= 848, max= 1696, per=3.75%, avg=1084.63, stdev=179.31, samples=19 00:30:46.814 iops : min= 212, max= 424, avg=271.16, stdev=44.83, samples=19 00:30:46.814 lat (msec) : 20=0.25%, 50=37.63%, 100=61.32%, 250=0.80% 00:30:46.814 cpu : usr=31.78%, sys=1.30%, ctx=947, majf=0, minf=9 00:30:46.814 IO depths : 1=1.6%, 2=3.7%, 4=12.1%, 8=70.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:46.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.814 filename2: (groupid=0, jobs=1): err= 0: pid=109808: Wed Nov 6 14:00:26 2024 00:30:46.814 read: IOPS=319, BW=1277KiB/s (1308kB/s)(12.5MiB/10026msec) 00:30:46.814 slat (usec): min=5, max=8021, avg=19.95, stdev=282.99 00:30:46.814 clat (msec): min=11, max=143, avg=49.97, stdev=17.50 00:30:46.814 lat (msec): min=11, max=143, avg=49.99, stdev=17.50 00:30:46.814 clat percentiles (msec): 00:30:46.814 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 36], 00:30:46.814 | 30.00th=[ 38], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 51], 00:30:46.814 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 84], 00:30:46.814 | 99.00th=[ 101], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:30:46.814 | 99.99th=[ 144] 00:30:46.814 bw ( KiB/s): min= 768, max= 1872, per=4.40%, avg=1274.40, stdev=231.97, samples=20 00:30:46.814 iops : min= 192, max= 468, avg=318.60, stdev=57.99, samples=20 00:30:46.814 lat (msec) : 20=1.84%, 50=57.43%, 100=39.98%, 250=0.75% 00:30:46.814 cpu : usr=31.70%, sys=1.49%, ctx=979, majf=0, minf=9 00:30:46.814 IO depths : 1=0.8%, 2=1.9%, 4=8.5%, 8=76.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:46.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 issued rwts: total=3202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:46.814 filename2: (groupid=0, jobs=1): err= 0: pid=109809: Wed Nov 6 14:00:26 2024 00:30:46.814 read: IOPS=275, BW=1102KiB/s (1128kB/s)(10.8MiB/10016msec) 00:30:46.814 slat (usec): min=3, max=8025, avg=18.92, stdev=264.16 00:30:46.814 clat (msec): min=23, max=119, avg=57.92, stdev=17.27 00:30:46.814 lat (msec): min=23, max=119, avg=57.94, stdev=17.27 00:30:46.814 clat percentiles (msec): 00:30:46.814 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 46], 00:30:46.814 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:30:46.814 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 85], 00:30:46.814 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:30:46.814 | 99.99th=[ 121] 00:30:46.814 bw ( KiB/s): min= 768, max= 1496, per=3.79%, avg=1097.20, stdev=181.93, samples=20 00:30:46.814 iops : min= 192, max= 374, avg=274.30, stdev=45.48, samples=20 00:30:46.814 lat (msec) : 50=36.43%, 100=62.31%, 250=1.27% 00:30:46.814 cpu : usr=31.67%, sys=1.39%, ctx=945, majf=0, minf=9 00:30:46.814 IO depths : 1=1.5%, 2=3.4%, 4=12.1%, 8=71.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:46.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.814 issued rwts: total=2759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.814 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:30:46.814 00:30:46.814 Run status group 0 (all jobs): 00:30:46.814 READ: bw=28.3MiB/s (29.6MB/s), 1086KiB/s-1393KiB/s (1112kB/s-1426kB/s), io=284MiB (298MB), run=10001-10043msec 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 bdev_null0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.814 [2024-11-06 14:00:26.449147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.814 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.815 bdev_null1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.815 { 00:30:46.815 "params": { 00:30:46.815 "name": "Nvme$subsystem", 00:30:46.815 "trtype": "$TEST_TRANSPORT", 00:30:46.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.815 "adrfam": "ipv4", 00:30:46.815 
"trsvcid": "$NVMF_PORT", 00:30:46.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.815 "hdgst": ${hdgst:-false}, 00:30:46.815 "ddgst": ${ddgst:-false} 00:30:46.815 }, 00:30:46.815 "method": "bdev_nvme_attach_controller" 00:30:46.815 } 00:30:46.815 EOF 00:30:46.815 )") 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.815 { 00:30:46.815 "params": { 00:30:46.815 "name": "Nvme$subsystem", 00:30:46.815 "trtype": "$TEST_TRANSPORT", 00:30:46.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.815 "adrfam": "ipv4", 00:30:46.815 "trsvcid": "$NVMF_PORT", 00:30:46.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.815 "hdgst": ${hdgst:-false}, 00:30:46.815 "ddgst": ${ddgst:-false} 00:30:46.815 }, 00:30:46.815 "method": "bdev_nvme_attach_controller" 00:30:46.815 } 00:30:46.815 EOF 00:30:46.815 )") 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.815 "params": { 00:30:46.815 "name": "Nvme0", 00:30:46.815 "trtype": "tcp", 00:30:46.815 "traddr": "10.0.0.3", 00:30:46.815 "adrfam": "ipv4", 00:30:46.815 "trsvcid": "4420", 00:30:46.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.815 "hdgst": false, 00:30:46.815 "ddgst": false 00:30:46.815 }, 00:30:46.815 "method": "bdev_nvme_attach_controller" 00:30:46.815 },{ 00:30:46.815 "params": { 00:30:46.815 "name": "Nvme1", 00:30:46.815 "trtype": "tcp", 00:30:46.815 "traddr": "10.0.0.3", 00:30:46.815 "adrfam": "ipv4", 00:30:46.815 "trsvcid": "4420", 00:30:46.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.815 "hdgst": false, 00:30:46.815 "ddgst": false 00:30:46.815 }, 00:30:46.815 "method": "bdev_nvme_attach_controller" 00:30:46.815 }' 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:46.815 14:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.815 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:46.815 ... 00:30:46.815 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:46.815 ... 
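The LD_PRELOAD + fio command closing the trace above is the standard way to drive fio against SPDK bdevs: the generated bdev JSON arrives on one file-descriptor argument (--spdk_json_conf /dev/fd/62) and the generated job file on another (/dev/fd/61). Isolated from the harness, a roughly equivalent invocation looks like the following, with job parameters taken from the log and /tmp/bdev.json standing in for the /dev/fd plumbing:

# The SPDK fio plugin requires fio's threaded mode (thread=1);
# filename= names the bdev exposed by the JSON config, e.g. Nvme0n1.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
    --thread=1 --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 --runtime=5

Those flag values mirror the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 parameters set by target/dif.sh@115 above and echoed in the per-file job banners below.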
00:30:46.815 fio-3.35 00:30:46.815 Starting 4 threads 00:30:52.093 00:30:52.093 filename0: (groupid=0, jobs=1): err= 0: pid=109945: Wed Nov 6 14:00:32 2024 00:30:52.093 read: IOPS=2349, BW=18.4MiB/s (19.2MB/s)(91.8MiB/5001msec) 00:30:52.093 slat (nsec): min=5848, max=51916, avg=9578.42, stdev=3562.15 00:30:52.093 clat (usec): min=1361, max=5412, avg=3359.70, stdev=155.53 00:30:52.093 lat (usec): min=1368, max=5437, avg=3369.28, stdev=155.35 00:30:52.093 clat percentiles (usec): 00:30:52.093 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3294], 00:30:52.093 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:30:52.093 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3458], 95.00th=[ 3490], 00:30:52.093 | 99.00th=[ 3818], 99.50th=[ 4178], 99.90th=[ 4228], 99.95th=[ 5342], 00:30:52.093 | 99.99th=[ 5407] 00:30:52.093 bw ( KiB/s): min=18688, max=18944, per=25.03%, avg=18816.00, stdev=90.51, samples=9 00:30:52.093 iops : min= 2336, max= 2368, avg=2352.00, stdev=11.31, samples=9 00:30:52.093 lat (msec) : 2=0.07%, 4=99.01%, 10=0.92% 00:30:52.093 cpu : usr=90.86%, sys=7.92%, ctx=7, majf=0, minf=0 00:30:52.093 IO depths : 1=9.5%, 2=25.0%, 4=50.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.093 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.093 issued rwts: total=11752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.093 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:52.093 filename0: (groupid=0, jobs=1): err= 0: pid=109946: Wed Nov 6 14:00:32 2024 00:30:52.093 read: IOPS=2348, BW=18.3MiB/s (19.2MB/s)(91.8MiB/5002msec) 00:30:52.093 slat (nsec): min=5850, max=75753, avg=9856.87, stdev=3835.50 00:30:52.093 clat (usec): min=1586, max=6045, avg=3376.31, stdev=220.77 00:30:52.093 lat (usec): min=1607, max=6072, avg=3386.17, stdev=220.68 00:30:52.093 clat percentiles (usec): 00:30:52.093 | 1.00th=[ 2835], 5.00th=[ 2900], 10.00th=[ 3294], 20.00th=[ 3326], 00:30:52.093 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:30:52.093 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3490], 95.00th=[ 3818], 00:30:52.093 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 5997], 00:30:52.093 | 99.99th=[ 6063] 00:30:52.093 bw ( KiB/s): min=18597, max=18944, per=25.01%, avg=18805.89, stdev=92.22, samples=9 00:30:52.093 iops : min= 2324, max= 2368, avg=2350.67, stdev=11.70, samples=9 00:30:52.093 lat (msec) : 2=0.08%, 4=99.34%, 10=0.58% 00:30:52.093 cpu : usr=92.48%, sys=6.50%, ctx=10, majf=0, minf=0 00:30:52.093 IO depths : 1=0.1%, 2=0.4%, 4=74.6%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.093 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.093 issued rwts: total=11747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.093 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:52.093 filename1: (groupid=0, jobs=1): err= 0: pid=109947: Wed Nov 6 14:00:32 2024 00:30:52.093 read: IOPS=2352, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5003msec) 00:30:52.093 slat (nsec): min=5814, max=45775, avg=9238.30, stdev=3662.13 00:30:52.093 clat (usec): min=1013, max=4368, avg=3358.84, stdev=186.27 00:30:52.093 lat (usec): min=1026, max=4381, avg=3368.07, stdev=186.03 00:30:52.093 clat percentiles (usec): 00:30:52.093 | 1.00th=[ 2507], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3326], 00:30:52.093 | 30.00th=[ 3326], 
40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:30:52.093 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3458], 95.00th=[ 3490], 00:30:52.093 | 99.00th=[ 4178], 99.50th=[ 4228], 99.90th=[ 4293], 99.95th=[ 4293], 00:30:52.093 | 99.99th=[ 4359] 00:30:52.093 bw ( KiB/s): min=18816, max=18944, per=25.06%, avg=18840.22, stdev=48.99, samples=9 00:30:52.093 iops : min= 2352, max= 2368, avg=2355.00, stdev= 6.08, samples=9 00:30:52.093 lat (msec) : 2=0.20%, 4=98.54%, 10=1.26% 00:30:52.093 cpu : usr=91.34%, sys=7.48%, ctx=6, majf=0, minf=0 00:30:52.093 IO depths : 1=7.3%, 2=25.0%, 4=50.0%, 8=17.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.094 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.094 issued rwts: total=11768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.094 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:52.094 filename1: (groupid=0, jobs=1): err= 0: pid=109948: Wed Nov 6 14:00:32 2024 00:30:52.094 read: IOPS=2349, BW=18.4MiB/s (19.2MB/s)(91.8MiB/5001msec) 00:30:52.094 slat (usec): min=5, max=109, avg= 7.85, stdev= 3.29 00:30:52.094 clat (usec): min=800, max=6038, avg=3366.25, stdev=353.45 00:30:52.094 lat (usec): min=822, max=6044, avg=3374.10, stdev=353.24 00:30:52.094 clat percentiles (usec): 00:30:52.094 | 1.00th=[ 2008], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3326], 00:30:52.094 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:30:52.094 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3458], 00:30:52.094 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 5604], 99.95th=[ 5604], 00:30:52.094 | 99.99th=[ 5669] 00:30:52.094 bw ( KiB/s): min=18688, max=18944, per=25.01%, avg=18805.89, stdev=93.08, samples=9 00:30:52.094 iops : min= 2336, max= 2368, avg=2350.67, stdev=11.70, samples=9 00:30:52.094 lat (usec) : 1000=0.08% 00:30:52.094 lat (msec) : 2=0.85%, 4=95.82%, 10=3.25% 00:30:52.094 cpu : usr=91.68%, sys=7.12%, ctx=55, majf=0, minf=0 00:30:52.094 IO depths : 1=6.4%, 2=25.0%, 4=50.0%, 8=18.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.094 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.094 issued rwts: total=11752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.094 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:52.094 00:30:52.094 Run status group 0 (all jobs): 00:30:52.094 READ: bw=73.4MiB/s (77.0MB/s), 18.3MiB/s-18.4MiB/s (19.2MB/s-19.3MB/s), io=367MiB (385MB), run=5001-5003msec 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 00:30:52.094 real 0m24.417s 00:30:52.094 user 2m4.549s 00:30:52.094 sys 0m7.632s 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 ************************************ 00:30:52.094 END TEST fio_dif_rand_params 00:30:52.094 ************************************ 00:30:52.094 14:00:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:52.094 14:00:32 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:52.094 14:00:32 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 ************************************ 00:30:52.094 START TEST fio_dif_digest 00:30:52.094 ************************************ 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 bdev_null0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 [2024-11-06 14:00:32.953421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.094 { 00:30:52.094 "params": { 00:30:52.094 "name": "Nvme$subsystem", 00:30:52.094 "trtype": "$TEST_TRANSPORT", 00:30:52.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.094 "adrfam": "ipv4", 00:30:52.094 "trsvcid": "$NVMF_PORT", 00:30:52.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.094 "hdgst": ${hdgst:-false}, 00:30:52.094 "ddgst": ${ddgst:-false} 00:30:52.094 }, 00:30:52.094 "method": "bdev_nvme_attach_controller" 00:30:52.094 } 00:30:52.094 EOF 00:30:52.094 )") 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
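The trace above (nvmf/common.sh@560 through @584) assembles the JSON that fio's spdk_bdev engine will read from /dev/fd/62: each subsystem expands a heredoc into a bash array, and jq validates the joined result, which is printed just below. A minimal standalone sketch of the idiom, assuming a single subsystem and the values visible in the printed JSON; the real gen_nvmf_target_json in nvmf/common.sh may differ in detail:

# sketch of the heredoc idiom traced above (illustrative values, not the helper itself)
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420
hdgst=true
ddgst=true
subsystem=0
config=()
config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
# pretty-print/validate the fragment, as the @584 trace line does
printf '%s\n' "${config[@]}" | jq .

With more than one subsystem, the fragments are comma-joined first (the IFS=, in the next trace line) so that jq sees a single JSON document.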
00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:30:52.094 14:00:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.094 "params": { 00:30:52.094 "name": "Nvme0", 00:30:52.094 "trtype": "tcp", 00:30:52.095 "traddr": "10.0.0.3", 00:30:52.095 "adrfam": "ipv4", 00:30:52.095 "trsvcid": "4420", 00:30:52.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.095 "hdgst": true, 00:30:52.095 "ddgst": true 00:30:52.095 }, 00:30:52.095 "method": "bdev_nvme_attach_controller" 00:30:52.095 }' 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:52.095 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:52.354 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:52.354 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:52.354 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:52.354 14:00:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.354 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:52.354 ... 
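The job listing above ("filename0: ... rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3") comes from a job file that gen_fio_conf writes to a file descriptor (/dev/fd/61 in the invocation), so it never appears verbatim in the log. A plausible reconstruction from the traced parameters; section and bdev names are assumptions, and the header/data digests are enabled in the bdev JSON above rather than here:

# hypothetical fio job file matching the traced run; the generated one may differ
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=10
iodepth=3
bs=128k
rw=randread
numjobs=3

[filename0]
filename=Nvme0n1   # assumes SPDK's default namespace bdev name for controller Nvme0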
00:30:52.354 fio-3.35 00:30:52.354 Starting 3 threads 00:31:04.602 00:31:04.602 filename0: (groupid=0, jobs=1): err= 0: pid=110055: Wed Nov 6 14:00:43 2024 00:31:04.603 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(302MiB/10005msec) 00:31:04.603 slat (nsec): min=6619, max=52899, avg=25278.38, stdev=6674.44 00:31:04.603 clat (usec): min=3589, max=31957, avg=12415.20, stdev=2343.89 00:31:04.603 lat (usec): min=3597, max=31981, avg=12440.48, stdev=2345.76 00:31:04.603 clat percentiles (usec): 00:31:04.603 | 1.00th=[ 4146], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[10552], 00:31:04.603 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:31:04.603 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:31:04.603 | 99.00th=[15533], 99.50th=[16057], 99.90th=[25297], 99.95th=[30016], 00:31:04.603 | 99.99th=[31851] 00:31:04.603 bw ( KiB/s): min=28416, max=35584, per=30.54%, avg=30834.74, stdev=1767.28, samples=19 00:31:04.603 iops : min= 222, max= 278, avg=240.84, stdev=13.81, samples=19 00:31:04.603 lat (msec) : 4=0.91%, 10=18.08%, 20=80.85%, 50=0.17% 00:31:04.603 cpu : usr=93.38%, sys=5.17%, ctx=18, majf=0, minf=0 00:31:04.603 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:04.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.603 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:04.603 filename0: (groupid=0, jobs=1): err= 0: pid=110056: Wed Nov 6 14:00:43 2024 00:31:04.603 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(356MiB/10005msec) 00:31:04.603 slat (nsec): min=6023, max=39045, avg=16316.60, stdev=6670.00 00:31:04.603 clat (usec): min=5274, max=54153, avg=10529.42, stdev=2683.48 00:31:04.603 lat (usec): min=5282, max=54178, avg=10545.74, stdev=2682.89 00:31:04.603 clat percentiles (usec): 00:31:04.603 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 9372], 00:31:04.603 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:31:04.603 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:31:04.603 | 99.00th=[13304], 99.50th=[13566], 99.90th=[53216], 99.95th=[53216], 00:31:04.603 | 99.99th=[54264] 00:31:04.603 bw ( KiB/s): min=32512, max=40878, per=35.98%, avg=36317.11, stdev=2062.23, samples=19 00:31:04.603 iops : min= 254, max= 319, avg=283.68, stdev=16.09, samples=19 00:31:04.603 lat (msec) : 10=24.39%, 20=75.29%, 50=0.11%, 100=0.21% 00:31:04.603 cpu : usr=93.12%, sys=5.64%, ctx=15, majf=0, minf=0 00:31:04.603 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:04.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 issued rwts: total=2845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.603 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:04.603 filename0: (groupid=0, jobs=1): err= 0: pid=110057: Wed Nov 6 14:00:43 2024 00:31:04.603 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(329MiB/10007msec) 00:31:04.603 slat (nsec): min=6075, max=44481, avg=15555.25, stdev=6249.19 00:31:04.603 clat (usec): min=4770, max=91416, avg=11371.45, stdev=7863.79 00:31:04.603 lat (usec): min=4787, max=91424, avg=11387.00, stdev=7863.82 00:31:04.603 clat percentiles (usec): 00:31:04.603 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 
9372], 00:31:04.603 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:31:04.603 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11338], 00:31:04.603 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[91751], 00:31:04.603 | 99.99th=[91751] 00:31:04.603 bw ( KiB/s): min=25037, max=39168, per=33.46%, avg=33775.84, stdev=3884.55, samples=19 00:31:04.603 iops : min= 195, max= 306, avg=263.84, stdev=30.42, samples=19 00:31:04.603 lat (msec) : 10=53.24%, 20=43.07%, 50=0.91%, 100=2.77% 00:31:04.603 cpu : usr=93.47%, sys=5.38%, ctx=53, majf=0, minf=0 00:31:04.603 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:04.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.603 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.603 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:04.603 00:31:04.603 Run status group 0 (all jobs): 00:31:04.603 READ: bw=98.6MiB/s (103MB/s), 30.1MiB/s-35.5MiB/s (31.6MB/s-37.3MB/s), io=987MiB (1034MB), run=10005-10007msec 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.603 00:31:04.603 real 0m11.206s 00:31:04.603 user 0m28.824s 00:31:04.603 sys 0m2.008s 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:04.603 14:00:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:04.603 ************************************ 00:31:04.603 END TEST fio_dif_digest 00:31:04.603 ************************************ 00:31:04.603 14:00:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:04.603 14:00:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.603 rmmod nvme_tcp 00:31:04.603 rmmod nvme_fabrics 00:31:04.603 rmmod nvme_keyring 00:31:04.603 14:00:44 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 109282 ']' 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 109282 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 109282 ']' 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 109282 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 109282 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:04.603 killing process with pid 109282 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 109282' 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@971 -- # kill 109282 00:31:04.603 14:00:44 nvmf_dif -- common/autotest_common.sh@976 -- # wait 109282 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:04.603 14:00:44 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:04.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:04.603 Waiting for block devices as requested 00:31:04.603 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.603 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:04.603 14:00:45 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@245 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:04.861 14:00:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.861 14:00:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.861 14:00:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.120 14:00:45 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:31:05.120 00:31:05.120 real 1m2.651s 00:31:05.120 user 3m49.421s 00:31:05.120 sys 0m21.581s 00:31:05.120 14:00:45 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:05.120 ************************************ 00:31:05.120 END TEST nvmf_dif 00:31:05.120 ************************************ 00:31:05.120 14:00:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.120 14:00:45 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:05.120 14:00:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:05.120 14:00:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:05.120 14:00:45 -- common/autotest_common.sh@10 -- # set +x 00:31:05.120 ************************************ 00:31:05.120 START TEST nvmf_abort_qd_sizes 00:31:05.120 ************************************ 00:31:05.120 14:00:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:05.120 * Looking for test storage... 00:31:05.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:05.120 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:05.120 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:31:05.120 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:05.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.380 --rc genhtml_branch_coverage=1 00:31:05.380 --rc genhtml_function_coverage=1 00:31:05.380 --rc genhtml_legend=1 00:31:05.380 --rc geninfo_all_blocks=1 00:31:05.380 --rc geninfo_unexecuted_blocks=1 00:31:05.380 00:31:05.380 ' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:05.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.380 --rc genhtml_branch_coverage=1 00:31:05.380 --rc genhtml_function_coverage=1 00:31:05.380 --rc genhtml_legend=1 00:31:05.380 --rc geninfo_all_blocks=1 00:31:05.380 --rc geninfo_unexecuted_blocks=1 00:31:05.380 00:31:05.380 ' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:05.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.380 --rc genhtml_branch_coverage=1 00:31:05.380 --rc genhtml_function_coverage=1 00:31:05.380 --rc genhtml_legend=1 00:31:05.380 --rc geninfo_all_blocks=1 00:31:05.380 --rc geninfo_unexecuted_blocks=1 00:31:05.380 00:31:05.380 ' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:05.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.380 --rc genhtml_branch_coverage=1 00:31:05.380 --rc genhtml_function_coverage=1 00:31:05.380 --rc genhtml_legend=1 00:31:05.380 --rc geninfo_all_blocks=1 00:31:05.380 --rc geninfo_unexecuted_blocks=1 00:31:05.380 00:31:05.380 ' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:05.380 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:05.381 Cannot find device "nvmf_init_br" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:05.381 Cannot find device "nvmf_init_br2" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:05.381 Cannot find device "nvmf_tgt_br" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:05.381 Cannot find device "nvmf_tgt_br2" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:05.381 Cannot find device "nvmf_init_br" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:05.381 Cannot find device "nvmf_init_br2" 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:31:05.381 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:05.640 Cannot find device "nvmf_tgt_br" 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:05.640 Cannot find device "nvmf_tgt_br2" 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:05.640 Cannot find device "nvmf_br" 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:05.640 Cannot find device "nvmf_init_if" 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:05.640 Cannot find device "nvmf_init_if2" 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:05.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
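The "Cannot find device" and "Cannot open network namespace" messages above are the expected no-op teardown before nvmf_veth_init rebuilds the test network; the @177 through @225 trace that follows creates it from scratch. A condensed sketch of that topology, using the names and addresses from the trace (the second initiator/target pair, the if2 interfaces, and the iptables ACCEPT rules are set up the same way and omitted here):

# one veth pair per side; the target end moves into a namespace, and the
# host-side peers are enslaved to a common bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3                                          # end-to-end sanity check, as the trace does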
00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:05.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:05.640 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:05.641 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:05.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:05.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:31:05.900 00:31:05.900 --- 10.0.0.3 ping statistics --- 00:31:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.900 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:05.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:05.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:31:05.900 00:31:05.900 --- 10.0.0.4 ping statistics --- 00:31:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.900 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:05.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:05.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:31:05.900 00:31:05.900 --- 10.0.0.1 ping statistics --- 00:31:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.900 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:05.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:05.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:31:05.900 00:31:05.900 --- 10.0.0.2 ping statistics --- 00:31:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.900 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:05.900 14:00:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:06.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:06.835 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:06.835 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:07.093 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=110711 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 110711 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 110711 ']' 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:07.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:07.094 14:00:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:07.094 [2024-11-06 14:00:47.908217] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:31:07.094 [2024-11-06 14:00:47.908292] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.352 [2024-11-06 14:00:48.059674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.352 [2024-11-06 14:00:48.139592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.352 [2024-11-06 14:00:48.139661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.352 [2024-11-06 14:00:48.139672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.352 [2024-11-06 14:00:48.139680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.352 [2024-11-06 14:00:48.139687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.352 [2024-11-06 14:00:48.141066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.352 [2024-11-06 14:00:48.141260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.352 [2024-11-06 14:00:48.141334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.352 [2024-11-06 14:00:48.141337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:31:07.919 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:31:08.179 14:00:48 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:08.179 14:00:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 ************************************ 00:31:08.179 START TEST spdk_target_abort 00:31:08.179 ************************************ 00:31:08.179 14:00:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:31:08.179 14:00:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:08.179 14:00:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:31:08.179 14:00:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.179 14:00:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 spdk_targetn1 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 [2024-11-06 14:00:49.011586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.179 [2024-11-06 14:00:49.064018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.179 14:00:49 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:08.179 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:08.180 14:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:11.469 Initializing NVMe Controllers 00:31:11.469 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:31:11.469 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:11.469 Initialization complete. Launching workers. 
00:31:11.469 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13248, failed: 0 00:31:11.469 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1116, failed to submit 12132 00:31:11.469 success 755, unsuccessful 361, failed 0 00:31:11.469 14:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:11.469 14:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.758 Initializing NVMe Controllers 00:31:14.758 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:31:14.758 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:14.758 Initialization complete. Launching workers. 00:31:14.758 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6021, failed: 0 00:31:14.758 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 4762 00:31:14.758 success 234, unsuccessful 1025, failed 0 00:31:15.017 14:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:15.017 14:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:18.305 Initializing NVMe Controllers 00:31:18.305 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:31:18.305 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:18.305 Initialization complete. Launching workers. 
00:31:18.305 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32903, failed: 0 00:31:18.305 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2572, failed to submit 30331 00:31:18.305 success 495, unsuccessful 2077, failed 0 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.305 14:00:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110711 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 110711 ']' 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 110711 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 110711 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:19.242 killing process with pid 110711 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 110711' 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 110711 00:31:19.242 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 110711 00:31:19.509 ************************************ 00:31:19.509 END TEST spdk_target_abort 00:31:19.509 ************************************ 00:31:19.509 00:31:19.509 real 0m11.393s 00:31:19.509 user 0m45.669s 00:31:19.509 sys 0m2.325s 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.509 14:01:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:19.509 14:01:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:19.509 14:01:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:19.509 14:01:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.509 ************************************ 00:31:19.509 START TEST kernel_target_abort 00:31:19.509 
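The three runs above are the rabort queue-depth sweep from abort_qd_sizes.sh@32-34; stripped of the per-iteration trace it is just the loop below (binary path, flags, and target string copied from the trace; -q is the queue depth, -w rw with -M 50 a mixed read/write workload, -o the 4096-byte I/O size).

    # Sketch of the qd sweep; each pass prints I/Os completed vs. aborts
    # submitted, e.g. at qd 4 above: 13248 I/Os, 1116 aborts, 755 successful.
    ABORT=/home/vagrant/spdk_repo/spdk/build/examples/abort
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
    done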
************************************ 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:19.509 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:19.776 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:19.776 14:01:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:20.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:20.294 Waiting for block devices as requested 00:31:20.294 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:20.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:20.553 No valid GPT data, bailing 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:20.553 No valid GPT data, bailing 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:31:20.553 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:20.812 No valid GPT data, bailing 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:20.812 No valid GPT data, bailing 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce --hostid=f9261117-b8af-4744-8c64-783a612116ce -a 10.0.0.1 -t tcp -s 4420 00:31:20.812 00:31:20.812 Discovery Log Number of Records 2, Generation counter 2 00:31:20.812 =====Discovery Log Entry 0====== 00:31:20.812 trtype: tcp 00:31:20.812 adrfam: ipv4 00:31:20.812 subtype: current discovery subsystem 00:31:20.812 treq: not specified, sq flow control disable supported 00:31:20.812 portid: 1 00:31:20.812 trsvcid: 4420 00:31:20.812 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:20.812 traddr: 10.0.0.1 00:31:20.812 eflags: none 00:31:20.812 sectype: none 00:31:20.812 =====Discovery Log Entry 1====== 00:31:20.812 trtype: tcp 00:31:20.812 adrfam: ipv4 00:31:20.812 subtype: nvme subsystem 00:31:20.812 treq: not specified, sq flow control disable supported 00:31:20.812 portid: 1 00:31:20.812 trsvcid: 4420 00:31:20.812 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:20.812 traddr: 10.0.0.1 00:31:20.812 eflags: none 00:31:20.812 sectype: none 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:20.812 14:01:01 
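Stripped of the xtrace prefixes, the configure_kernel_target steps above (nvmf/common.sh@686-705) are the usual nvmet configfs sequence. The directory layout and echoed values are taken from the trace; the attribute file names are the standard nvmet configfs entries and are inferred here, since the trace does not show the redirection targets.

    # Kernel NVMe-oF target setup, as a sketch (attribute names assumed).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string
    echo 1 > "$subsys/attr_allow_any_host"                         # no host allowlist
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"         # the one spare, non-GPT disk found above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # publish the subsystem on the port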
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.812 14:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.097 Initializing NVMe Controllers 00:31:24.097 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.097 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:24.097 Initialization complete. Launching workers. 00:31:24.097 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38842, failed: 0 00:31:24.097 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38842, failed to submit 0 00:31:24.097 success 0, unsuccessful 38842, failed 0 00:31:24.097 14:01:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:24.097 14:01:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:27.381 Initializing NVMe Controllers 00:31:27.381 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:27.381 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:27.381 Initialization complete. Launching workers. 
00:31:27.381 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72997, failed: 0 00:31:27.381 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37619, failed to submit 35378 00:31:27.381 success 0, unsuccessful 37619, failed 0 00:31:27.381 14:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:27.381 14:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.674 Initializing NVMe Controllers 00:31:30.674 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.674 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.674 Initialization complete. Launching workers. 00:31:30.674 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96903, failed: 0 00:31:30.674 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24168, failed to submit 72735 00:31:30.674 success 0, unsuccessful 24168, failed 0 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:30.674 14:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:31.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:33.774 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:34.032 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:34.032 00:31:34.032 real 0m14.395s 00:31:34.032 user 0m6.261s 00:31:34.032 sys 0m5.448s 00:31:34.032 14:01:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:34.032 ************************************ 00:31:34.032 END TEST kernel_target_abort 00:31:34.032 ************************************ 00:31:34.032 14:01:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:34.032 
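The clean_kernel_target calls traced above (nvmf/common.sh@712-723) are the mirror image of the setup; as a sketch, reusing the path variables from the setup sketch earlier:

    # Teardown: disable the namespace, unlink the port, remove directories
    # innermost-first, then unload the nvmet modules (order as in the trace).
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet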
14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:34.032 rmmod nvme_tcp 00:31:34.032 rmmod nvme_fabrics 00:31:34.032 rmmod nvme_keyring 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:34.032 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 110711 ']' 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 110711 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 110711 ']' 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 110711 00:31:34.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (110711) - No such process 00:31:34.291 Process with pid 110711 is not found 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 110711 is not found' 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:34.291 14:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:34.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:34.859 Waiting for block devices as requested 00:31:34.859 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:34.859 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:35.118 14:01:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:35.118 14:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:35.118 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:35.118 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:31:35.376 00:31:35.376 real 0m30.247s 00:31:35.376 user 0m53.404s 00:31:35.376 sys 0m9.957s 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:35.376 14:01:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:35.376 ************************************ 00:31:35.376 END TEST nvmf_abort_qd_sizes 00:31:35.376 ************************************ 00:31:35.376 14:01:16 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:35.376 14:01:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:35.376 14:01:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:35.376 14:01:16 -- common/autotest_common.sh@10 -- # set +x 00:31:35.377 ************************************ 00:31:35.377 START TEST keyring_file 00:31:35.377 ************************************ 00:31:35.377 14:01:16 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:35.636 * Looking for test storage... 
00:31:35.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@345 -- # : 1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@353 -- # local d=1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@355 -- # echo 1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@353 -- # local d=2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@355 -- # echo 2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.636 14:01:16 keyring_file -- scripts/common.sh@368 -- # return 0 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:35.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.636 --rc genhtml_branch_coverage=1 00:31:35.636 --rc genhtml_function_coverage=1 00:31:35.636 --rc genhtml_legend=1 00:31:35.636 --rc geninfo_all_blocks=1 00:31:35.636 --rc geninfo_unexecuted_blocks=1 00:31:35.636 00:31:35.636 ' 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:35.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.636 --rc genhtml_branch_coverage=1 00:31:35.636 --rc genhtml_function_coverage=1 00:31:35.636 --rc genhtml_legend=1 00:31:35.636 --rc geninfo_all_blocks=1 00:31:35.636 --rc 
geninfo_unexecuted_blocks=1 00:31:35.636 00:31:35.636 ' 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:35.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.636 --rc genhtml_branch_coverage=1 00:31:35.636 --rc genhtml_function_coverage=1 00:31:35.636 --rc genhtml_legend=1 00:31:35.636 --rc geninfo_all_blocks=1 00:31:35.636 --rc geninfo_unexecuted_blocks=1 00:31:35.636 00:31:35.636 ' 00:31:35.636 14:01:16 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:35.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.636 --rc genhtml_branch_coverage=1 00:31:35.636 --rc genhtml_function_coverage=1 00:31:35.636 --rc genhtml_legend=1 00:31:35.636 --rc geninfo_all_blocks=1 00:31:35.636 --rc geninfo_unexecuted_blocks=1 00:31:35.636 00:31:35.636 ' 00:31:35.636 14:01:16 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:35.636 14:01:16 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:31:35.636 14:01:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:35.637 14:01:16 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.637 14:01:16 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.637 14:01:16 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.637 14:01:16 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.637 14:01:16 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.637 14:01:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.637 14:01:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.637 14:01:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:35.637 14:01:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@51 -- # : 0 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:35.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:35.637 14:01:16 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lRp00GjBSt 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:35.637 14:01:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lRp00GjBSt 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lRp00GjBSt 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lRp00GjBSt 00:31:35.637 14:01:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:35.637 14:01:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:35.897 14:01:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c3rQrgzTYw 00:31:35.897 14:01:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:35.897 14:01:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:35.897 14:01:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c3rQrgzTYw 00:31:35.897 14:01:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c3rQrgzTYw 00:31:35.897 14:01:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.c3rQrgzTYw 00:31:35.897 14:01:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=111650 00:31:35.897 14:01:16 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.897 14:01:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111650 00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111650 ']' 00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:35.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
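The prep_key/format_interchange_psk steps above write each test key in the NVMe/TCP TLS PSK interchange format. A minimal sketch of what the inline `python -` at nvmf/common.sh@733 computes, assuming the documented NVMeTLSkey-1 layout (a two-hex-digit digest id, then base64 of the key bytes followed by their little-endian CRC32) and, as the script appears to do, using the hex string literally as the key bytes:

    # Sketch of format_interchange_psk (assumed layout: NVMeTLSkey-1:<hh>:<b64(key||crc32le)>:).
    key=00112233445566778899aabbccddeeff   # test key material, taken as literal bytes
    digest=0                               # 0 => no PSK digest
    path=$(mktemp)
    python3 - "$key" "$digest" > "$path" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # literal string, not hex-decoded
    crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended before encoding
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF
    chmod 0600 "$path"                     # restrictive perms, as in keyring/common.sh@21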
00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:35.897 14:01:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:35.897 [2024-11-06 14:01:16.690040] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:31:35.897 [2024-11-06 14:01:16.690746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111650 ] 00:31:36.155 [2024-11-06 14:01:16.845039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.155 [2024-11-06 14:01:16.908976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:36.722 14:01:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:36.722 [2024-11-06 14:01:17.583048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.722 null0 00:31:36.722 [2024-11-06 14:01:17.614979] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:36.722 [2024-11-06 14:01:17.615166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.722 14:01:17 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.722 14:01:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:36.722 [2024-11-06 14:01:17.646906] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:36.722 2024/11/06 14:01:17 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:36.722 request: 00:31:36.722 { 00:31:36.722 "method": "nvmf_subsystem_add_listener", 00:31:36.722 "params": { 
00:31:36.722 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.722 "secure_channel": false, 00:31:36.722 "listen_address": { 00:31:36.722 "trtype": "tcp", 00:31:36.722 "traddr": "127.0.0.1", 00:31:36.981 "trsvcid": "4420" 00:31:36.981 } 00:31:36.981 } 00:31:36.981 } 00:31:36.981 Got JSON-RPC error response 00:31:36.981 GoRPCClient: error on JSON-RPC call 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:36.981 14:01:17 keyring_file -- keyring/file.sh@47 -- # bperfpid=111685 00:31:36.981 14:01:17 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:36.981 14:01:17 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111685 /var/tmp/bperf.sock 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111685 ']' 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:36.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:36.981 14:01:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:36.981 [2024-11-06 14:01:17.709441] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
00:31:36.981 [2024-11-06 14:01:17.709510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111685 ] 00:31:36.981 [2024-11-06 14:01:17.860164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.239 [2024-11-06 14:01:17.930461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.805 14:01:18 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.805 14:01:18 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:37.805 14:01:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:37.805 14:01:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:38.062 14:01:18 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c3rQrgzTYw 00:31:38.062 14:01:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c3rQrgzTYw 00:31:38.320 14:01:19 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:31:38.320 14:01:19 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:38.320 14:01:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.320 14:01:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.320 14:01:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:38.578 14:01:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lRp00GjBSt == \/\t\m\p\/\t\m\p\.\l\R\p\0\0\G\j\B\S\t ]] 00:31:38.578 14:01:19 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:31:38.578 14:01:19 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.578 14:01:19 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.c3rQrgzTYw == \/\t\m\p\/\t\m\p\.\c\3\r\Q\r\g\z\T\Y\w ]] 00:31:38.578 14:01:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.578 14:01:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:38.836 14:01:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:38.836 14:01:19 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:31:38.836 14:01:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:38.836 14:01:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:38.836 14:01:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.836 14:01:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.836 14:01:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:39.095 14:01:19 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:31:39.095 14:01:19 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:39.095 14:01:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:39.353 [2024-11-06 14:01:20.166359] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:39.353 nvme0n1 00:31:39.353 14:01:20 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:31:39.353 14:01:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:39.353 14:01:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:39.353 14:01:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:39.353 14:01:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:39.353 14:01:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:39.615 14:01:20 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:31:39.615 14:01:20 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:31:39.615 14:01:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:39.615 14:01:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:39.615 14:01:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:39.615 14:01:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:39.615 14:01:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:39.882 14:01:20 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:31:39.882 14:01:20 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.882 Running I/O for 1 seconds... 
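Condensed, the bperf side of the flow traced above (keyring/file.sh@50-63) is: register both key files with the bdevperf instance over its private RPC socket, attach the TLS-enabled controller with --psk, then start I/O through bdevperf.py. Socket path, key names, and temp-file paths are the ones from this run's trace.

    # Sketch of the bperf-side keyring flow.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $RPC keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt   # valid PSK for cnode0
    $RPC keyring_file_add_key key1 /tmp/tmp.c3rQrgzTYw   # second key, wrong for this subsystem
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests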
00:31:41.258 13751.00 IOPS, 53.71 MiB/s 00:31:41.258 Latency(us) 00:31:41.258 [2024-11-06T14:01:22.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.258 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:41.258 nvme0n1 : 1.01 13787.22 53.86 0.00 0.00 9254.92 6527.28 19266.00 00:31:41.258 [2024-11-06T14:01:22.191Z] =================================================================================================================== 00:31:41.258 [2024-11-06T14:01:22.191Z] Total : 13787.22 53.86 0.00 0.00 9254.92 6527.28 19266.00 00:31:41.258 { 00:31:41.258 "results": [ 00:31:41.258 { 00:31:41.258 "job": "nvme0n1", 00:31:41.258 "core_mask": "0x2", 00:31:41.258 "workload": "randrw", 00:31:41.258 "percentage": 50, 00:31:41.258 "status": "finished", 00:31:41.258 "queue_depth": 128, 00:31:41.258 "io_size": 4096, 00:31:41.258 "runtime": 1.006657, 00:31:41.258 "iops": 13787.218486535136, 00:31:41.258 "mibps": 53.85632221302787, 00:31:41.258 "io_failed": 0, 00:31:41.258 "io_timeout": 0, 00:31:41.258 "avg_latency_us": 9254.91896381549, 00:31:41.258 "min_latency_us": 6527.28032128514, 00:31:41.258 "max_latency_us": 19266.00481927711 00:31:41.258 } 00:31:41.258 ], 00:31:41.258 "core_count": 1 00:31:41.258 } 00:31:41.258 14:01:21 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:41.258 14:01:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:41.258 14:01:22 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:31:41.258 14:01:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:41.258 14:01:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:41.258 14:01:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:41.258 14:01:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:41.258 14:01:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:41.517 14:01:22 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:41.517 14:01:22 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:31:41.517 14:01:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:41.517 14:01:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:41.517 14:01:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:41.517 14:01:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:41.517 14:01:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:41.775 14:01:22 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:31:41.775 14:01:22 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:41.775 14:01:22 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.775 14:01:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:41.775 14:01:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:42.033 [2024-11-06 14:01:22.751224] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:42.033 [2024-11-06 14:01:22.751609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6510 (107): Transport endpoint is not connected 00:31:42.033 [2024-11-06 14:01:22.752594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c6510 (9): Bad file descriptor 00:31:42.033 [2024-11-06 14:01:22.753593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:42.033 [2024-11-06 14:01:22.753615] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:42.033 [2024-11-06 14:01:22.753627] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:42.033 [2024-11-06 14:01:22.753639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
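That failure is the point of this step: key1 is not the PSK the listener was configured with, so the TLS handshake collapses and bdev_nvme_attach_controller has to error out. The NOT wrapper traced above inverts the exit status so the test passes only when the attach is rejected; a reduced sketch of the idea (SPDK's real helper in autotest_common.sh also validates its argument and records the exit status, which is where the es=... lines just below come from):

# Expected-failure helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test requires
}

# The target only knows the PSK behind key0, so an attach presenting
# key1 must be refused.
NOT "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1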
00:31:42.034 2024/11/06 14:01:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:42.034 request: 00:31:42.034 { 00:31:42.034 "method": "bdev_nvme_attach_controller", 00:31:42.034 "params": { 00:31:42.034 "name": "nvme0", 00:31:42.034 "trtype": "tcp", 00:31:42.034 "traddr": "127.0.0.1", 00:31:42.034 "adrfam": "ipv4", 00:31:42.034 "trsvcid": "4420", 00:31:42.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.034 "prchk_reftag": false, 00:31:42.034 "prchk_guard": false, 00:31:42.034 "hdgst": false, 00:31:42.034 "ddgst": false, 00:31:42.034 "psk": "key1", 00:31:42.034 "allow_unrecognized_csi": false 00:31:42.034 } 00:31:42.034 } 00:31:42.034 Got JSON-RPC error response 00:31:42.034 GoRPCClient: error on JSON-RPC call 00:31:42.034 14:01:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:42.034 14:01:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:42.034 14:01:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:42.034 14:01:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:42.034 14:01:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:31:42.034 14:01:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:42.034 14:01:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:42.034 14:01:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:42.034 14:01:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:42.034 14:01:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:42.292 14:01:23 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:42.292 14:01:23 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:31:42.292 14:01:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:42.292 14:01:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:42.292 14:01:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:42.292 14:01:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:42.292 14:01:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:42.550 14:01:23 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:31:42.550 14:01:23 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:31:42.550 14:01:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:42.550 14:01:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:31:42.550 14:01:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:42.809 14:01:23 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:31:42.809 14:01:23 keyring_file -- keyring/file.sh@78 -- # jq length 00:31:42.809 14:01:23 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:43.068 14:01:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:31:43.068 14:01:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.lRp00GjBSt 00:31:43.068 14:01:23 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.068 14:01:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.068 14:01:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.327 [2024-11-06 14:01:24.114760] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lRp00GjBSt': 0100660 00:31:43.327 [2024-11-06 14:01:24.114806] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:43.327 2024/11/06 14:01:24 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.lRp00GjBSt], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:43.327 request: 00:31:43.327 { 00:31:43.327 "method": "keyring_file_add_key", 00:31:43.327 "params": { 00:31:43.327 "name": "key0", 00:31:43.327 "path": "/tmp/tmp.lRp00GjBSt" 00:31:43.327 } 00:31:43.327 } 00:31:43.327 Got JSON-RPC error response 00:31:43.327 GoRPCClient: error on JSON-RPC call 00:31:43.327 14:01:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:43.327 14:01:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:43.327 14:01:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:43.327 14:01:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:43.327 14:01:24 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.lRp00GjBSt 00:31:43.327 14:01:24 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.327 14:01:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lRp00GjBSt 00:31:43.585 14:01:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.lRp00GjBSt 00:31:43.585 14:01:24 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:31:43.585 14:01:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:43.585 14:01:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:43.585 14:01:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:43.585 14:01:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:43.585 14:01:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:43.844 14:01:24 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:31:43.844 14:01:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.844 14:01:24 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:43.844 14:01:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.103 [2024-11-06 14:01:24.801782] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lRp00GjBSt': No such file or directory 00:31:44.103 [2024-11-06 14:01:24.801822] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:44.103 [2024-11-06 14:01:24.801842] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:44.103 [2024-11-06 14:01:24.801853] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:31:44.103 [2024-11-06 14:01:24.801871] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:44.103 [2024-11-06 14:01:24.801880] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:44.103 2024/11/06 14:01:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:44.103 request: 00:31:44.103 { 00:31:44.103 "method": "bdev_nvme_attach_controller", 00:31:44.103 "params": { 00:31:44.103 "name": "nvme0", 00:31:44.103 "trtype": "tcp", 00:31:44.103 "traddr": "127.0.0.1", 00:31:44.103 "adrfam": "ipv4", 00:31:44.103 "trsvcid": "4420", 00:31:44.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.103 "prchk_reftag": false, 00:31:44.103 "prchk_guard": false, 00:31:44.103 "hdgst": false, 00:31:44.103 "ddgst": false, 00:31:44.103 "psk": "key0", 00:31:44.103 "allow_unrecognized_csi": false 00:31:44.103 } 00:31:44.103 } 00:31:44.103 Got JSON-RPC error response 00:31:44.103 
GoRPCClient: error on JSON-RPC call 00:31:44.103 14:01:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:44.103 14:01:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:44.103 14:01:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:44.103 14:01:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:44.103 14:01:24 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:31:44.103 14:01:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:44.362 14:01:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y0Mk8X2pOL 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:44.362 14:01:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y0Mk8X2pOL 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y0Mk8X2pOL 00:31:44.362 14:01:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.y0Mk8X2pOL 00:31:44.362 14:01:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y0Mk8X2pOL 00:31:44.362 14:01:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y0Mk8X2pOL 00:31:44.621 14:01:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.621 14:01:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.880 nvme0n1 00:31:44.880 14:01:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:31:44.880 14:01:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:44.880 14:01:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:44.880 14:01:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.880 14:01:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.880 14:01:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
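The prep_key/format_interchange_psk sequence above rebuilds key0 in the NVMe TLS PSK interchange format: the configured key bytes get a little-endian CRC32 appended, the result is base64-encoded, and the whole thing is wrapped in an NVMeTLSkey-1:<hash>:...: header, where digest 0 renders as "00" (no retained-hash transform). A sketch of what the inline python - heredoc appears to compute, reconstructed from this trace rather than copied from nvmf/common.sh; the expected output matches the keyctl payload seen later in the keyring_linux run:

key=00112233445566778899aabbccddeeff
digest=0
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
# CRC32 over the raw key bytes, appended little-endian, then base64.
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
PY
# -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: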
00:31:45.138 14:01:25 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:31:45.138 14:01:25 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:31:45.138 14:01:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:45.138 14:01:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:31:45.138 14:01:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:31:45.138 14:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:45.138 14:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:45.138 14:01:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:45.397 14:01:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:31:45.397 14:01:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:31:45.397 14:01:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:45.397 14:01:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:45.397 14:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:45.397 14:01:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:45.397 14:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:45.656 14:01:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:31:45.656 14:01:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:45.656 14:01:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:45.915 14:01:26 keyring_file -- keyring/file.sh@105 -- # jq length 00:31:45.915 14:01:26 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:31:45.915 14:01:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:46.173 14:01:26 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:31:46.173 14:01:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y0Mk8X2pOL 00:31:46.173 14:01:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y0Mk8X2pOL 00:31:46.431 14:01:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c3rQrgzTYw 00:31:46.431 14:01:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c3rQrgzTYw 00:31:46.689 14:01:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:46.689 14:01:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:46.948 nvme0n1 00:31:46.948 14:01:27 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:31:46.948 14:01:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
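Note the ordering the trace just verified: keyring_file_remove_key on a key that a controller still references does not destroy it. keyring_get_keys keeps reporting the key with removed set to true and refcnt still 1 until bdev_nvme_detach_controller drops the last reference, after which the key list length finally reaches 0. A one-liner to observe that intermediate state, using the same rpc/sock variables as above:

# Immediately after removing an in-use key: still listed, flagged as
# removed, reference still held by the attached controller.
"$rpc" -s "$sock" keyring_get_keys |
    jq -c '.[] | select(.name == "key0") | {removed, refcnt}'
# -> {"removed":true,"refcnt":1}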
00:31:47.207 14:01:27 keyring_file -- keyring/file.sh@113 -- # config='{ 00:31:47.207 "subsystems": [ 00:31:47.207 { 00:31:47.207 "subsystem": "keyring", 00:31:47.207 "config": [ 00:31:47.207 { 00:31:47.207 "method": "keyring_file_add_key", 00:31:47.207 "params": { 00:31:47.207 "name": "key0", 00:31:47.207 "path": "/tmp/tmp.y0Mk8X2pOL" 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "keyring_file_add_key", 00:31:47.207 "params": { 00:31:47.207 "name": "key1", 00:31:47.207 "path": "/tmp/tmp.c3rQrgzTYw" 00:31:47.207 } 00:31:47.207 } 00:31:47.207 ] 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "subsystem": "iobuf", 00:31:47.207 "config": [ 00:31:47.207 { 00:31:47.207 "method": "iobuf_set_options", 00:31:47.207 "params": { 00:31:47.207 "enable_numa": false, 00:31:47.207 "large_bufsize": 135168, 00:31:47.207 "large_pool_count": 1024, 00:31:47.207 "small_bufsize": 8192, 00:31:47.207 "small_pool_count": 8192 00:31:47.207 } 00:31:47.207 } 00:31:47.207 ] 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "subsystem": "sock", 00:31:47.207 "config": [ 00:31:47.207 { 00:31:47.207 "method": "sock_set_default_impl", 00:31:47.207 "params": { 00:31:47.207 "impl_name": "posix" 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "sock_impl_set_options", 00:31:47.207 "params": { 00:31:47.207 "enable_ktls": false, 00:31:47.207 "enable_placement_id": 0, 00:31:47.207 "enable_quickack": false, 00:31:47.207 "enable_recv_pipe": true, 00:31:47.207 "enable_zerocopy_send_client": false, 00:31:47.207 "enable_zerocopy_send_server": true, 00:31:47.207 "impl_name": "ssl", 00:31:47.207 "recv_buf_size": 4096, 00:31:47.207 "send_buf_size": 4096, 00:31:47.207 "tls_version": 0, 00:31:47.207 "zerocopy_threshold": 0 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "sock_impl_set_options", 00:31:47.207 "params": { 00:31:47.207 "enable_ktls": false, 00:31:47.207 "enable_placement_id": 0, 00:31:47.207 "enable_quickack": false, 00:31:47.207 "enable_recv_pipe": true, 00:31:47.207 "enable_zerocopy_send_client": false, 00:31:47.207 "enable_zerocopy_send_server": true, 00:31:47.207 "impl_name": "posix", 00:31:47.207 "recv_buf_size": 2097152, 00:31:47.207 "send_buf_size": 2097152, 00:31:47.207 "tls_version": 0, 00:31:47.207 "zerocopy_threshold": 0 00:31:47.207 } 00:31:47.207 } 00:31:47.207 ] 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "subsystem": "vmd", 00:31:47.207 "config": [] 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "subsystem": "accel", 00:31:47.207 "config": [ 00:31:47.207 { 00:31:47.207 "method": "accel_set_options", 00:31:47.207 "params": { 00:31:47.207 "buf_count": 2048, 00:31:47.207 "large_cache_size": 16, 00:31:47.207 "sequence_count": 2048, 00:31:47.207 "small_cache_size": 128, 00:31:47.207 "task_count": 2048 00:31:47.207 } 00:31:47.207 } 00:31:47.207 ] 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "subsystem": "bdev", 00:31:47.207 "config": [ 00:31:47.207 { 00:31:47.207 "method": "bdev_set_options", 00:31:47.207 "params": { 00:31:47.207 "bdev_auto_examine": true, 00:31:47.207 "bdev_io_cache_size": 256, 00:31:47.207 "bdev_io_pool_size": 65535, 00:31:47.207 "iobuf_large_cache_size": 16, 00:31:47.207 "iobuf_small_cache_size": 128 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "bdev_raid_set_options", 00:31:47.207 "params": { 00:31:47.207 "process_max_bandwidth_mb_sec": 0, 00:31:47.207 "process_window_size_kb": 1024 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "bdev_iscsi_set_options", 00:31:47.207 "params": { 00:31:47.207 
"timeout_sec": 30 00:31:47.207 } 00:31:47.207 }, 00:31:47.207 { 00:31:47.207 "method": "bdev_nvme_set_options", 00:31:47.207 "params": { 00:31:47.207 "action_on_timeout": "none", 00:31:47.207 "allow_accel_sequence": false, 00:31:47.207 "arbitration_burst": 0, 00:31:47.207 "bdev_retry_count": 3, 00:31:47.207 "ctrlr_loss_timeout_sec": 0, 00:31:47.207 "delay_cmd_submit": true, 00:31:47.207 "dhchap_dhgroups": [ 00:31:47.207 "null", 00:31:47.207 "ffdhe2048", 00:31:47.207 "ffdhe3072", 00:31:47.207 "ffdhe4096", 00:31:47.207 "ffdhe6144", 00:31:47.207 "ffdhe8192" 00:31:47.207 ], 00:31:47.207 "dhchap_digests": [ 00:31:47.207 "sha256", 00:31:47.207 "sha384", 00:31:47.207 "sha512" 00:31:47.207 ], 00:31:47.207 "disable_auto_failback": false, 00:31:47.207 "fast_io_fail_timeout_sec": 0, 00:31:47.207 "generate_uuids": false, 00:31:47.208 "high_priority_weight": 0, 00:31:47.208 "io_path_stat": false, 00:31:47.208 "io_queue_requests": 512, 00:31:47.208 "keep_alive_timeout_ms": 10000, 00:31:47.208 "low_priority_weight": 0, 00:31:47.208 "medium_priority_weight": 0, 00:31:47.208 "nvme_adminq_poll_period_us": 10000, 00:31:47.208 "nvme_error_stat": false, 00:31:47.208 "nvme_ioq_poll_period_us": 0, 00:31:47.208 "rdma_cm_event_timeout_ms": 0, 00:31:47.208 "rdma_max_cq_size": 0, 00:31:47.208 "rdma_srq_size": 0, 00:31:47.208 "reconnect_delay_sec": 0, 00:31:47.208 "timeout_admin_us": 0, 00:31:47.208 "timeout_us": 0, 00:31:47.208 "transport_ack_timeout": 0, 00:31:47.208 "transport_retry_count": 4, 00:31:47.208 "transport_tos": 0 00:31:47.208 } 00:31:47.208 }, 00:31:47.208 { 00:31:47.208 "method": "bdev_nvme_attach_controller", 00:31:47.208 "params": { 00:31:47.208 "adrfam": "IPv4", 00:31:47.208 "ctrlr_loss_timeout_sec": 0, 00:31:47.208 "ddgst": false, 00:31:47.208 "fast_io_fail_timeout_sec": 0, 00:31:47.208 "hdgst": false, 00:31:47.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.208 "multipath": "multipath", 00:31:47.208 "name": "nvme0", 00:31:47.208 "prchk_guard": false, 00:31:47.208 "prchk_reftag": false, 00:31:47.208 "psk": "key0", 00:31:47.208 "reconnect_delay_sec": 0, 00:31:47.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.208 "traddr": "127.0.0.1", 00:31:47.208 "trsvcid": "4420", 00:31:47.208 "trtype": "TCP" 00:31:47.208 } 00:31:47.208 }, 00:31:47.208 { 00:31:47.208 "method": "bdev_nvme_set_hotplug", 00:31:47.208 "params": { 00:31:47.208 "enable": false, 00:31:47.208 "period_us": 100000 00:31:47.208 } 00:31:47.208 }, 00:31:47.208 { 00:31:47.208 "method": "bdev_wait_for_examine" 00:31:47.208 } 00:31:47.208 ] 00:31:47.208 }, 00:31:47.208 { 00:31:47.208 "subsystem": "nbd", 00:31:47.208 "config": [] 00:31:47.208 } 00:31:47.208 ] 00:31:47.208 }' 00:31:47.208 14:01:27 keyring_file -- keyring/file.sh@115 -- # killprocess 111685 00:31:47.208 14:01:27 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111685 ']' 00:31:47.208 14:01:27 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111685 00:31:47.208 14:01:27 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:47.208 14:01:27 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:47.208 14:01:27 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111685 00:31:47.208 14:01:28 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:47.208 14:01:28 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:47.208 killing process with pid 111685 00:31:47.208 14:01:28 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 111685' 00:31:47.208 Received shutdown signal, test time was about 1.000000 seconds 00:31:47.208 00:31:47.208 Latency(us) 00:31:47.208 [2024-11-06T14:01:28.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.208 [2024-11-06T14:01:28.141Z] =================================================================================================================== 00:31:47.208 [2024-11-06T14:01:28.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:47.208 14:01:28 keyring_file -- common/autotest_common.sh@971 -- # kill 111685 00:31:47.208 14:01:28 keyring_file -- common/autotest_common.sh@976 -- # wait 111685 00:31:47.467 14:01:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=112148 00:31:47.467 14:01:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 112148 /var/tmp/bperf.sock 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 112148 ']' 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:47.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:47.467 14:01:28 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:47.467 14:01:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:47.467 14:01:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:31:47.467 "subsystems": [ 00:31:47.467 { 00:31:47.467 "subsystem": "keyring", 00:31:47.467 "config": [ 00:31:47.467 { 00:31:47.467 "method": "keyring_file_add_key", 00:31:47.467 "params": { 00:31:47.467 "name": "key0", 00:31:47.467 "path": "/tmp/tmp.y0Mk8X2pOL" 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "method": "keyring_file_add_key", 00:31:47.467 "params": { 00:31:47.467 "name": "key1", 00:31:47.467 "path": "/tmp/tmp.c3rQrgzTYw" 00:31:47.467 } 00:31:47.467 } 00:31:47.467 ] 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "subsystem": "iobuf", 00:31:47.467 "config": [ 00:31:47.467 { 00:31:47.467 "method": "iobuf_set_options", 00:31:47.467 "params": { 00:31:47.467 "enable_numa": false, 00:31:47.467 "large_bufsize": 135168, 00:31:47.467 "large_pool_count": 1024, 00:31:47.467 "small_bufsize": 8192, 00:31:47.467 "small_pool_count": 8192 00:31:47.467 } 00:31:47.467 } 00:31:47.467 ] 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "subsystem": "sock", 00:31:47.467 "config": [ 00:31:47.467 { 00:31:47.467 "method": "sock_set_default_impl", 00:31:47.467 "params": { 00:31:47.467 "impl_name": "posix" 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "method": "sock_impl_set_options", 00:31:47.467 "params": { 00:31:47.467 "enable_ktls": false, 00:31:47.467 "enable_placement_id": 0, 00:31:47.467 "enable_quickack": false, 00:31:47.467 "enable_recv_pipe": true, 00:31:47.467 "enable_zerocopy_send_client": false, 00:31:47.467 "enable_zerocopy_send_server": true, 00:31:47.467 "impl_name": "ssl", 00:31:47.467 "recv_buf_size": 4096, 00:31:47.467 "send_buf_size": 4096, 00:31:47.467 "tls_version": 0, 00:31:47.467 "zerocopy_threshold": 0 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 
"method": "sock_impl_set_options", 00:31:47.467 "params": { 00:31:47.467 "enable_ktls": false, 00:31:47.467 "enable_placement_id": 0, 00:31:47.467 "enable_quickack": false, 00:31:47.467 "enable_recv_pipe": true, 00:31:47.467 "enable_zerocopy_send_client": false, 00:31:47.467 "enable_zerocopy_send_server": true, 00:31:47.467 "impl_name": "posix", 00:31:47.467 "recv_buf_size": 2097152, 00:31:47.467 "send_buf_size": 2097152, 00:31:47.467 "tls_version": 0, 00:31:47.467 "zerocopy_threshold": 0 00:31:47.467 } 00:31:47.467 } 00:31:47.467 ] 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "subsystem": "vmd", 00:31:47.467 "config": [] 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "subsystem": "accel", 00:31:47.467 "config": [ 00:31:47.467 { 00:31:47.467 "method": "accel_set_options", 00:31:47.467 "params": { 00:31:47.467 "buf_count": 2048, 00:31:47.467 "large_cache_size": 16, 00:31:47.467 "sequence_count": 2048, 00:31:47.467 "small_cache_size": 128, 00:31:47.467 "task_count": 2048 00:31:47.467 } 00:31:47.467 } 00:31:47.467 ] 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "subsystem": "bdev", 00:31:47.467 "config": [ 00:31:47.467 { 00:31:47.467 "method": "bdev_set_options", 00:31:47.467 "params": { 00:31:47.467 "bdev_auto_examine": true, 00:31:47.467 "bdev_io_cache_size": 256, 00:31:47.467 "bdev_io_pool_size": 65535, 00:31:47.467 "iobuf_large_cache_size": 16, 00:31:47.467 "iobuf_small_cache_size": 128 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "method": "bdev_raid_set_options", 00:31:47.467 "params": { 00:31:47.467 "process_max_bandwidth_mb_sec": 0, 00:31:47.467 "process_window_size_kb": 1024 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "method": "bdev_iscsi_set_options", 00:31:47.467 "params": { 00:31:47.467 "timeout_sec": 30 00:31:47.467 } 00:31:47.467 }, 00:31:47.467 { 00:31:47.467 "method": "bdev_nvme_set_options", 00:31:47.467 "params": { 00:31:47.467 "action_on_timeout": "none", 00:31:47.467 "allow_accel_sequence": false, 00:31:47.467 "arbitration_burst": 0, 00:31:47.467 "bdev_retry_count": 3, 00:31:47.467 "ctrlr_loss_timeout_sec": 0, 00:31:47.467 "delay_cmd_submit": true, 00:31:47.467 "dhchap_dhgroups": [ 00:31:47.467 "null", 00:31:47.467 "ffdhe2048", 00:31:47.467 "ffdhe3072", 00:31:47.467 "ffdhe4096", 00:31:47.467 "ffdhe6144", 00:31:47.467 "ffdhe8192" 00:31:47.467 ], 00:31:47.467 "dhchap_digests": [ 00:31:47.467 "sha256", 00:31:47.467 "sha384", 00:31:47.467 "sha512" 00:31:47.467 ], 00:31:47.467 "disable_auto_failback": false, 00:31:47.467 "fast_io_fail_timeout_sec": 0, 00:31:47.467 "generate_uuids": false, 00:31:47.467 "high_priority_weight": 0, 00:31:47.467 "io_path_stat": false, 00:31:47.467 "io_queue_requests": 512, 00:31:47.467 "keep_alive_timeout_ms": 10000, 00:31:47.467 "low_priority_weight": 0, 00:31:47.467 "medium_priority_weight": 0, 00:31:47.467 "nvme_adminq_poll_period_us": 10000, 00:31:47.467 "nvme_error_stat": false, 00:31:47.467 "nvme_ioq_poll_period_us": 0, 00:31:47.467 "rdma_cm_event_timeout_ms": 0, 00:31:47.467 "rdma_max_cq_size": 0, 00:31:47.467 "rdma_srq_size": 0, 00:31:47.467 "reconnect_delay_sec": 0, 00:31:47.467 "timeout_admin_us": 0, 00:31:47.467 "timeout_us": 0, 00:31:47.467 "transport_ack_timeout": 0, 00:31:47.467 "transport_retry_count": 4, 00:31:47.468 "transport_tos": 0 00:31:47.468 } 00:31:47.468 }, 00:31:47.468 { 00:31:47.468 "method": "bdev_nvme_attach_controller", 00:31:47.468 "params": { 00:31:47.468 "adrfam": "IPv4", 00:31:47.468 "ctrlr_loss_timeout_sec": 0, 00:31:47.468 "ddgst": false, 00:31:47.468 "fast_io_fail_timeout_sec": 0, 
00:31:47.468 "hdgst": false, 00:31:47.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.468 "multipath": "multipath", 00:31:47.468 "name": "nvme0", 00:31:47.468 "prchk_guard": false, 00:31:47.468 "prchk_reftag": false, 00:31:47.468 "psk": "key0", 00:31:47.468 "reconnect_delay_sec": 0, 00:31:47.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.468 "traddr": "127.0.0.1", 00:31:47.468 "trsvcid": "4420", 00:31:47.468 "trtype": "TCP" 00:31:47.468 } 00:31:47.468 }, 00:31:47.468 { 00:31:47.468 "method": "bdev_nvme_set_hotplug", 00:31:47.468 "params": { 00:31:47.468 "enable": false, 00:31:47.468 "period_us": 100000 00:31:47.468 } 00:31:47.468 }, 00:31:47.468 { 00:31:47.468 "method": "bdev_wait_for_examine" 00:31:47.468 } 00:31:47.468 ] 00:31:47.468 }, 00:31:47.468 { 00:31:47.468 "subsystem": "nbd", 00:31:47.468 "config": [] 00:31:47.468 } 00:31:47.468 ] 00:31:47.468 }' 00:31:47.468 [2024-11-06 14:01:28.324242] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 00:31:47.468 [2024-11-06 14:01:28.324321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112148 ] 00:31:47.727 [2024-11-06 14:01:28.477732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.727 [2024-11-06 14:01:28.544853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.985 [2024-11-06 14:01:28.760901] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:48.561 14:01:29 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:48.561 14:01:29 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:48.561 14:01:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.561 14:01:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:31:48.561 14:01:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:48.561 14:01:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.561 14:01:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.819 14:01:29 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:31:48.819 14:01:29 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:31:48.819 14:01:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.819 14:01:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:48.819 14:01:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.819 14:01:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.819 14:01:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:49.077 14:01:29 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:31:49.077 14:01:29 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:31:49.077 14:01:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:49.077 14:01:29 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:31:49.335 14:01:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:31:49.335 14:01:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:49.335 14:01:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.y0Mk8X2pOL /tmp/tmp.c3rQrgzTYw 00:31:49.335 14:01:30 keyring_file -- keyring/file.sh@20 -- # killprocess 112148 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 112148 ']' 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@956 -- # kill -0 112148 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112148 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:49.335 killing process with pid 112148 00:31:49.335 14:01:30 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112148' 00:31:49.335 Received shutdown signal, test time was about 1.000000 seconds 00:31:49.335 00:31:49.335 Latency(us) 00:31:49.335 [2024-11-06T14:01:30.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.335 [2024-11-06T14:01:30.269Z] =================================================================================================================== 00:31:49.336 [2024-11-06T14:01:30.269Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:49.336 14:01:30 keyring_file -- common/autotest_common.sh@971 -- # kill 112148 00:31:49.336 14:01:30 keyring_file -- common/autotest_common.sh@976 -- # wait 112148 00:31:49.594 14:01:30 keyring_file -- keyring/file.sh@21 -- # killprocess 111650 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111650 ']' 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111650 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111650 00:31:49.594 killing process with pid 111650 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111650' 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@971 -- # kill 111650 00:31:49.594 14:01:30 keyring_file -- common/autotest_common.sh@976 -- # wait 111650 00:31:50.167 00:31:50.167 real 0m14.602s 00:31:50.167 user 0m34.121s 00:31:50.167 sys 0m4.312s 00:31:50.167 14:01:30 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:50.167 ************************************ 00:31:50.167 END TEST keyring_file 00:31:50.167 ************************************ 00:31:50.167 14:01:30 keyring_file -- common/autotest_common.sh@10 -- # 
set +x 00:31:50.167 14:01:30 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:31:50.167 14:01:30 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:50.167 14:01:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:50.167 14:01:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:50.167 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:50.167 ************************************ 00:31:50.167 START TEST keyring_linux 00:31:50.167 ************************************ 00:31:50.167 14:01:30 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:50.167 Joined session keyring: 1073283324 00:31:50.167 * Looking for test storage... 00:31:50.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:50.167 14:01:31 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:50.167 14:01:31 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:31:50.167 14:01:31 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:50.167 14:01:31 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.167 14:01:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.426 14:01:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:31:50.426 14:01:31 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.426 14:01:31 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:50.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.426 --rc genhtml_branch_coverage=1 00:31:50.426 --rc genhtml_function_coverage=1 00:31:50.426 --rc genhtml_legend=1 00:31:50.426 --rc geninfo_all_blocks=1 00:31:50.426 --rc geninfo_unexecuted_blocks=1 00:31:50.426 00:31:50.426 ' 00:31:50.426 14:01:31 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:50.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.426 --rc genhtml_branch_coverage=1 00:31:50.426 --rc genhtml_function_coverage=1 00:31:50.426 --rc genhtml_legend=1 00:31:50.426 --rc geninfo_all_blocks=1 00:31:50.426 --rc geninfo_unexecuted_blocks=1 00:31:50.426 00:31:50.426 ' 00:31:50.426 14:01:31 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:50.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.426 --rc genhtml_branch_coverage=1 00:31:50.426 --rc genhtml_function_coverage=1 00:31:50.426 --rc genhtml_legend=1 00:31:50.426 --rc geninfo_all_blocks=1 00:31:50.426 --rc geninfo_unexecuted_blocks=1 00:31:50.426 00:31:50.426 ' 00:31:50.426 14:01:31 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:50.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.426 --rc genhtml_branch_coverage=1 00:31:50.426 --rc genhtml_function_coverage=1 00:31:50.426 --rc genhtml_legend=1 00:31:50.426 --rc geninfo_all_blocks=1 00:31:50.426 --rc geninfo_unexecuted_blocks=1 00:31:50.426 00:31:50.426 ' 00:31:50.426 14:01:31 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:50.426 14:01:31 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.426 14:01:31 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.426 14:01:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9261117-b8af-4744-8c64-783a612116ce 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f9261117-b8af-4744-8c64-783a612116ce 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:50.427 14:01:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.427 14:01:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.427 14:01:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.427 14:01:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.427 14:01:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.427 14:01:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.427 14:01:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.427 14:01:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:50.427 14:01:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:50.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:50.427 /tmp/:spdk-test:key0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:50.427 14:01:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:50.427 /tmp/:spdk-test:key1 00:31:50.427 14:01:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112312 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:50.427 14:01:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112312 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 112312 ']' 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:50.427 14:01:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:50.427 [2024-11-06 14:01:31.322907] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
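The "[: : integer expression expected" complaint a few entries up is benign harness noise: nvmf/common.sh line 33 runs a [ "$var" -eq 1 ] test against a variable that is empty under this configuration, as the '[' '' -eq 1 ']' xtrace shows. If that warning ever needed silencing, the usual fix is a default expansion; SOME_FLAG below is a stand-in, since the trace does not reveal which variable line 33 actually tests:

# Hypothetical guard for the "[ '' -eq 1 ]" pattern seen above;
# ${var:-0} substitutes 0 when the variable is unset or empty, so the
# numeric comparison always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi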
00:31:50.427 [2024-11-06 14:01:31.322990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112312 ] 00:31:50.686 [2024-11-06 14:01:31.475405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.686 [2024-11-06 14:01:31.526400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:51.621 [2024-11-06 14:01:32.207520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.621 null0 00:31:51.621 [2024-11-06 14:01:32.255429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:51.621 [2024-11-06 14:01:32.255609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:51.621 188834288 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:51.621 802870301 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112347 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:51.621 14:01:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112347 /var/tmp/bperf.sock 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 112347 ']' 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:51.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:51.621 14:01:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:51.621 [2024-11-06 14:01:32.342931] Starting SPDK v25.01-pre git sha1 079966333 / DPDK 24.03.0 initialization... 
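Here the linux variant diverges from the file-based test: instead of pointing SPDK at key files, the interchange-format PSKs are loaded into the kernel session keyring with keyctl, which prints the kernel-assigned serial numbers (188834288 and 802870301 above) that the test later compares against SPDK's view of the same keys. The same steps in isolation, assuming the /tmp/:spdk-test:key* files written by prep_key:

# Load both PSKs into the session keyring (@s); keyctl add prints the
# serial number for each new key.
sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)

# Searching the session keyring by description must resolve to the same
# serial; this is the get_keysn assertion the trace performs below.
[[ $(keyctl search @s user :spdk-test:key0) == "$sn0" ]]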
00:31:51.621 [2024-11-06 14:01:32.343006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112347 ]
00:31:51.621 [2024-11-06 14:01:32.495221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:51.879 [2024-11-06 14:01:32.556785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:52.445 14:01:33 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:31:52.445 14:01:33 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:31:52.445 14:01:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:31:52.445 14:01:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:31:52.703 14:01:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:31:52.703 14:01:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:52.962 14:01:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:31:52.963 14:01:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:31:53.221 [2024-11-06 14:01:33.968341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:31:53.221 nvme0n1
00:31:53.221 14:01:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:31:53.221 14:01:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:31:53.221 14:01:34 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:31:53.221 14:01:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:31:53.221 14:01:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:53.221 14:01:34 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:31:53.480 14:01:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:31:53.480 14:01:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:31:53.480 14:01:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:31:53.480 14:01:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:31:53.480 14:01:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:53.480 14:01:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:53.480 14:01:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@25 -- # sn=188834288
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 188834288 == \1\8\8\8\3\4\2\8\8 ]]
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 188834288
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:31:53.738 14:01:34 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:53.738 Running I/O for 1 seconds...
00:31:55.113 13009.00 IOPS, 50.82 MiB/s
00:31:55.113 Latency(us)
00:31:55.113 [2024-11-06T14:01:36.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:55.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:55.113 nvme0n1 : 1.01 13016.21 50.84 0.00 0.00 9784.76 8632.85 19160.73
00:31:55.113 [2024-11-06T14:01:36.046Z] ===================================================================================================================
00:31:55.113 [2024-11-06T14:01:36.046Z] Total : 13016.21 50.84 0.00 0.00 9784.76 8632.85 19160.73
00:31:55.113 {
00:31:55.113 "results": [
00:31:55.113 {
00:31:55.113 "job": "nvme0n1",
00:31:55.113 "core_mask": "0x2",
00:31:55.113 "workload": "randread",
00:31:55.113 "status": "finished",
00:31:55.113 "queue_depth": 128,
00:31:55.113 "io_size": 4096,
00:31:55.113 "runtime": 1.00928,
00:31:55.113 "iops": 13016.209575142677,
00:31:55.113 "mibps": 50.84456865290108,
00:31:55.113 "io_failed": 0,
00:31:55.113 "io_timeout": 0,
00:31:55.113 "avg_latency_us": 9784.757786355896,
00:31:55.113 "min_latency_us": 8632.854618473895,
00:31:55.113 "max_latency_us": 19160.72610441767
00:31:55.113 }
00:31:55.113 ],
00:31:55.113 "core_count": 1
00:31:55.113 }
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:55.113 14:01:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:31:55.113 14:01:35 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:31:55.113 14:01:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:55.372 14:01:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:31:55.372 14:01:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:31:55.372 14:01:36 keyring_linux -- keyring/linux.sh@23 -- # return
00:31:55.372 14:01:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:55.372 14:01:36 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:55.372 14:01:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:55.631 [2024-11-06 14:01:36.375513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:55.631 [2024-11-06 14:01:36.375724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c13490 (107): Transport endpoint is not connected
00:31:55.631 [2024-11-06 14:01:36.376709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c13490 (9): Bad file descriptor
00:31:55.631 [2024-11-06 14:01:36.377707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:31:55.631 [2024-11-06 14:01:36.377723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:55.631 [2024-11-06 14:01:36.377733] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:31:55.631 [2024-11-06 14:01:36.377745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
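The attach above is deliberately driven with :spdk-test:key1, a PSK the target listener was never configured with, so the connection collapses with the errors shown and the JSON-RPC error record that follows is the expected outcome; the surrounding NOT helper only passes when the RPC exits non-zero. A sketch of the same expect-failure check, reusing the rpc.py invocation and flags that appear verbatim in the log:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bperf.sock"

    def attach_with_psk(psk: str) -> subprocess.CompletedProcess:
        # Same bdev_nvme_attach_controller call the test issues through bperf_cmd.
        return subprocess.run(
            [RPC, "-s", SOCK, "bdev_nvme_attach_controller",
             "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
             "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
             "-q", "nqn.2016-06.io.spdk:host0", "--psk", psk],
            capture_output=True, text=True)

    # The NOT wrapper's contract: the call with the wrong key must fail.
    assert attach_with_psk(":spdk-test:key1").returncode != 0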
00:31:55.631 2024/11/06 14:01:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:31:55.631 request:
00:31:55.631 {
00:31:55.631 "method": "bdev_nvme_attach_controller",
00:31:55.631 "params": {
00:31:55.631 "name": "nvme0",
00:31:55.631 "trtype": "tcp",
00:31:55.631 "traddr": "127.0.0.1",
00:31:55.631 "adrfam": "ipv4",
00:31:55.631 "trsvcid": "4420",
00:31:55.631 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:55.631 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:55.631 "prchk_reftag": false,
00:31:55.631 "prchk_guard": false,
00:31:55.631 "hdgst": false,
00:31:55.631 "ddgst": false,
00:31:55.631 "psk": ":spdk-test:key1",
00:31:55.631 "allow_unrecognized_csi": false
00:31:55.631 }
00:31:55.631 }
00:31:55.631 Got JSON-RPC error response
00:31:55.631 GoRPCClient: error on JSON-RPC call
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@33 -- # sn=188834288
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 188834288
00:31:55.631 1 links removed
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@33 -- # sn=802870301
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 802870301
00:31:55.631 1 links removed
00:31:55.631 14:01:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112347
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 112347 ']'
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 112347
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112347
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:31:55.631 killing process with pid 112347
14:01:36 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112347'
14:01:36 keyring_linux -- common/autotest_common.sh@971 -- # kill 112347
00:31:55.631 Received shutdown signal, test time was about 1.000000 seconds
00:31:55.631
00:31:55.631 Latency(us)
00:31:55.631 [2024-11-06T14:01:36.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:55.631 [2024-11-06T14:01:36.564Z] ===================================================================================================================
00:31:55.631 [2024-11-06T14:01:36.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:55.631 14:01:36 keyring_linux -- common/autotest_common.sh@976 -- # wait 112347
00:31:55.890 14:01:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112312
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 112312 ']'
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 112312
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112312
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:31:55.890 killing process with pid 112312
14:01:36 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112312'
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@971 -- # kill 112312
00:31:55.890 14:01:36 keyring_linux -- common/autotest_common.sh@976 -- # wait 112312
00:31:56.495
00:31:56.495 real 0m6.224s
00:31:56.495 user 0m11.154s
00:31:56.495 sys 0m2.028s
00:31:56.495 ************************************
00:31:56.495 END TEST keyring_linux
00:31:56.495 ************************************
00:31:56.495 14:01:37 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:56.495 14:01:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:56.495 14:01:37 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:56.495 14:01:37 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:31:56.495 14:01:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:56.495 14:01:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:56.495 14:01:37 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:31:56.495 14:01:37 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:31:56.495 14:01:37 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:31:56.495 14:01:37 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:56.495 14:01:37 -- common/autotest_common.sh@10 -- # set +x
00:31:56.495 14:01:37 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:31:56.495 14:01:37 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:31:56.495 14:01:37 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:31:56.495 14:01:37 -- common/autotest_common.sh@10 -- # set +x
00:31:59.785 INFO: APP EXITING
00:31:59.785 INFO: killing all VMs
00:31:59.785 INFO: killing vhost app
00:31:59.785 INFO: EXIT DONE
00:32:00.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:00.044 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:00.044 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:00.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:01.239 Cleaning
00:32:01.239 Removing: /var/run/dpdk/spdk0/config
00:32:01.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:01.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:01.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:01.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:01.239 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:01.239 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:01.239 Removing: /var/run/dpdk/spdk1/config
00:32:01.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:01.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:01.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:01.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:01.239 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:01.239 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:01.239 Removing: /var/run/dpdk/spdk2/config
00:32:01.239 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:01.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:01.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:01.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:01.240 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:01.240 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:01.240 Removing: /var/run/dpdk/spdk3/config
00:32:01.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:01.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:01.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:01.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:01.240 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:01.240 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:01.240 Removing: /var/run/dpdk/spdk4/config
00:32:01.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:01.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:01.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:01.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:01.240 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:01.240 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:01.240 Removing: /dev/shm/nvmf_trace.0
00:32:01.240 Removing: /dev/shm/spdk_tgt_trace.pid58524
00:32:01.240 Removing: /var/run/dpdk/spdk0
00:32:01.240 Removing: /var/run/dpdk/spdk1
00:32:01.240 Removing: /var/run/dpdk/spdk2
00:32:01.240 Removing: /var/run/dpdk/spdk3
00:32:01.240 Removing: /var/run/dpdk/spdk4
00:32:01.240 Removing: /var/run/dpdk/spdk_pid101839
00:32:01.240 Removing: /var/run/dpdk/spdk_pid101885
00:32:01.240 Removing: /var/run/dpdk/spdk_pid102243
00:32:01.240 Removing: /var/run/dpdk/spdk_pid102293
00:32:01.240 Removing: /var/run/dpdk/spdk_pid102702
00:32:01.498 Removing: /var/run/dpdk/spdk_pid103282
00:32:01.498 Removing: /var/run/dpdk/spdk_pid103706
00:32:01.498 Removing: /var/run/dpdk/spdk_pid104786
00:32:01.498 Removing: /var/run/dpdk/spdk_pid105866
00:32:01.498 Removing: /var/run/dpdk/spdk_pid105984
00:32:01.498 Removing: /var/run/dpdk/spdk_pid106042
00:32:01.498 Removing: /var/run/dpdk/spdk_pid107678
00:32:01.498 Removing: /var/run/dpdk/spdk_pid108009
00:32:01.498 Removing: /var/run/dpdk/spdk_pid108349
00:32:01.498 Removing: /var/run/dpdk/spdk_pid108933
00:32:01.498 Removing: /var/run/dpdk/spdk_pid108944
00:32:01.498 Removing: /var/run/dpdk/spdk_pid109357
00:32:01.498 Removing: /var/run/dpdk/spdk_pid109517
00:32:01.498 Removing: /var/run/dpdk/spdk_pid109674
00:32:01.498 Removing: /var/run/dpdk/spdk_pid109772
00:32:01.498 Removing: /var/run/dpdk/spdk_pid109937
00:32:01.498 Removing: /var/run/dpdk/spdk_pid110046
00:32:01.498 Removing: /var/run/dpdk/spdk_pid110780
00:32:01.498 Removing: /var/run/dpdk/spdk_pid110814
00:32:01.498 Removing: /var/run/dpdk/spdk_pid110851
00:32:01.498 Removing: /var/run/dpdk/spdk_pid111110
00:32:01.498 Removing: /var/run/dpdk/spdk_pid111140
00:32:01.498 Removing: /var/run/dpdk/spdk_pid111175
00:32:01.498 Removing: /var/run/dpdk/spdk_pid111650
00:32:01.499 Removing: /var/run/dpdk/spdk_pid111685
00:32:01.499 Removing: /var/run/dpdk/spdk_pid112148
00:32:01.499 Removing: /var/run/dpdk/spdk_pid112312
00:32:01.499 Removing: /var/run/dpdk/spdk_pid112347
00:32:01.499 Removing: /var/run/dpdk/spdk_pid58360
00:32:01.499 Removing: /var/run/dpdk/spdk_pid58524
00:32:01.499 Removing: /var/run/dpdk/spdk_pid58793
00:32:01.499 Removing: /var/run/dpdk/spdk_pid58886
00:32:01.499 Removing: /var/run/dpdk/spdk_pid58931
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59040
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59069
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59213
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59498
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59682
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59773
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59873
00:32:01.499 Removing: /var/run/dpdk/spdk_pid59976
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60014
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60050
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60114
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60237
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60871
00:32:01.499 Removing: /var/run/dpdk/spdk_pid60935
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61004
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61032
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61123
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61151
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61230
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61259
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61316
00:32:01.499 Removing: /var/run/dpdk/spdk_pid61346
00:32:01.759 Removing: /var/run/dpdk/spdk_pid61393
00:32:01.759 Removing: /var/run/dpdk/spdk_pid61423
00:32:01.759 Removing: /var/run/dpdk/spdk_pid61589
00:32:01.759 Removing: /var/run/dpdk/spdk_pid61623
00:32:01.759 Removing: /var/run/dpdk/spdk_pid61709
00:32:01.759 Removing: /var/run/dpdk/spdk_pid62201
00:32:01.759 Removing: /var/run/dpdk/spdk_pid62609
00:32:01.759 Removing: /var/run/dpdk/spdk_pid65127
00:32:01.759 Removing: /var/run/dpdk/spdk_pid65167
00:32:01.759 Removing: /var/run/dpdk/spdk_pid65538
00:32:01.759 Removing: /var/run/dpdk/spdk_pid65588
00:32:01.759 Removing: /var/run/dpdk/spdk_pid66006
00:32:01.759 Removing: /var/run/dpdk/spdk_pid66604
00:32:01.759 Removing: /var/run/dpdk/spdk_pid67056
00:32:01.759 Removing: /var/run/dpdk/spdk_pid68121
00:32:01.759 Removing: /var/run/dpdk/spdk_pid69228
00:32:01.759 Removing: /var/run/dpdk/spdk_pid69345
00:32:01.759 Removing: /var/run/dpdk/spdk_pid69413
00:32:01.759 Removing: /var/run/dpdk/spdk_pid71055
00:32:01.759 Removing: /var/run/dpdk/spdk_pid71406
00:32:01.759 Removing: /var/run/dpdk/spdk_pid75391
00:32:01.759 Removing: /var/run/dpdk/spdk_pid75825
00:32:01.759 Removing: /var/run/dpdk/spdk_pid76469
00:32:01.759 Removing: /var/run/dpdk/spdk_pid77016
00:32:01.759 Removing: /var/run/dpdk/spdk_pid82600
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83097
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83206
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83364
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83417
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83469
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83522
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83695
00:32:01.759 Removing: /var/run/dpdk/spdk_pid83859
00:32:01.759 Removing: /var/run/dpdk/spdk_pid84145
00:32:01.759 Removing: /var/run/dpdk/spdk_pid84277
00:32:01.759 Removing: /var/run/dpdk/spdk_pid84537
00:32:01.759 Removing: /var/run/dpdk/spdk_pid84671
00:32:01.759 Removing: /var/run/dpdk/spdk_pid84800
00:32:01.759 Removing: /var/run/dpdk/spdk_pid85201
00:32:01.759 Removing: /var/run/dpdk/spdk_pid85664
00:32:01.759 Removing: /var/run/dpdk/spdk_pid85665
00:32:01.759 Removing: /var/run/dpdk/spdk_pid85666
00:32:01.759 Removing: /var/run/dpdk/spdk_pid85952
00:32:01.759 Removing: /var/run/dpdk/spdk_pid86237
00:32:01.759 Removing: /var/run/dpdk/spdk_pid86674
00:32:01.759 Removing: /var/run/dpdk/spdk_pid87057
00:32:01.759 Removing: /var/run/dpdk/spdk_pid87676
00:32:01.759 Removing: /var/run/dpdk/spdk_pid87678
00:32:01.759 Removing: /var/run/dpdk/spdk_pid88075
00:32:01.759 Removing: /var/run/dpdk/spdk_pid88089
00:32:01.759 Removing: /var/run/dpdk/spdk_pid88109
00:32:01.759 Removing: /var/run/dpdk/spdk_pid88134
00:32:02.019 Removing: /var/run/dpdk/spdk_pid88143
00:32:02.019 Removing: /var/run/dpdk/spdk_pid88555
00:32:02.019 Removing: /var/run/dpdk/spdk_pid88604
00:32:02.019 Removing: /var/run/dpdk/spdk_pid89001
00:32:02.019 Removing: /var/run/dpdk/spdk_pid89258
00:32:02.019 Removing: /var/run/dpdk/spdk_pid89801
00:32:02.019 Removing: /var/run/dpdk/spdk_pid90445
00:32:02.019 Removing: /var/run/dpdk/spdk_pid91827
00:32:02.019 Removing: /var/run/dpdk/spdk_pid92486
00:32:02.019 Removing: /var/run/dpdk/spdk_pid92492
00:32:02.019 Removing: /var/run/dpdk/spdk_pid94559
00:32:02.019 Removing: /var/run/dpdk/spdk_pid94655
00:32:02.019 Removing: /var/run/dpdk/spdk_pid94745
00:32:02.019 Removing: /var/run/dpdk/spdk_pid94831
00:32:02.019 Removing: /var/run/dpdk/spdk_pid94995
00:32:02.019 Removing: /var/run/dpdk/spdk_pid95085
00:32:02.019 Removing: /var/run/dpdk/spdk_pid95175
00:32:02.019 Removing: /var/run/dpdk/spdk_pid95260
00:32:02.019 Removing: /var/run/dpdk/spdk_pid95661
00:32:02.019 Removing: /var/run/dpdk/spdk_pid96443
00:32:02.019 Removing: /var/run/dpdk/spdk_pid97866
00:32:02.019 Removing: /var/run/dpdk/spdk_pid98072
00:32:02.019 Removing: /var/run/dpdk/spdk_pid98373
00:32:02.019 Removing: /var/run/dpdk/spdk_pid98934
00:32:02.019 Removing: /var/run/dpdk/spdk_pid99333
00:32:02.019 Clean
00:32:02.019 14:01:42 -- common/autotest_common.sh@1451 -- # return 0
00:32:02.019 14:01:42 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:32:02.019 14:01:42 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:02.019 14:01:42 -- common/autotest_common.sh@10 -- # set +x
00:32:02.277 14:01:42 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:32:02.277 14:01:42 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:02.277 14:01:42 -- common/autotest_common.sh@10 -- # set +x
00:32:02.277 14:01:43 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:02.277 14:01:43 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:02.277 14:01:43 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:02.277 14:01:43 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:32:02.277 14:01:43 -- spdk/autotest.sh@394 -- # hostname
00:32:02.277 14:01:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:02.536 geninfo: WARNING: invalid characters removed from testname!
00:32:29.079 14:02:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:30.982 14:02:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:33.548 14:02:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:35.456 14:02:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:37.992 14:02:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:39.897 14:02:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:42.429 14:02:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:42.429 14:02:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:42.429 14:02:22 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:42.429 14:02:22 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:42.429 14:02:22 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:42.429 14:02:22 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:42.429 + [[ -n 5206 ]]
00:32:42.430 + sudo kill 5206
00:32:42.439 [Pipeline] }
00:32:42.455 [Pipeline] // timeout
00:32:42.460 [Pipeline] }
00:32:42.474 [Pipeline] // stage
00:32:42.480 [Pipeline] }
00:32:42.493 [Pipeline] // catchError
00:32:42.501 [Pipeline] stage
00:32:42.503 [Pipeline] { (Stop VM)
00:32:42.515 [Pipeline] sh
00:32:42.797 + vagrant halt
00:32:45.330 ==> default: Halting domain...
00:32:51.912 [Pipeline] sh
00:32:52.194 + vagrant destroy -f
00:32:55.480 ==> default: Removing domain...
00:32:55.493 [Pipeline] sh
00:32:55.776 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:32:55.785 [Pipeline] }
00:32:55.799 [Pipeline] // stage
00:32:55.805 [Pipeline] }
00:32:55.818 [Pipeline] // dir
00:32:55.823 [Pipeline] }
00:32:55.837 [Pipeline] // wrap
00:32:55.843 [Pipeline] }
00:32:55.854 [Pipeline] // catchError
00:32:55.863 [Pipeline] stage
00:32:55.865 [Pipeline] { (Epilogue)
00:32:55.878 [Pipeline] sh
00:32:56.159 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:01.446 [Pipeline] catchError
00:33:01.447 [Pipeline] {
00:33:01.460 [Pipeline] sh
00:33:01.746 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:01.747 Artifacts sizes are good
00:33:01.758 [Pipeline] }
00:33:01.772 [Pipeline] // catchError
00:33:01.782 [Pipeline] archiveArtifacts
00:33:01.788 Archiving artifacts
00:33:01.894 [Pipeline] cleanWs
00:33:01.904 [WS-CLEANUP] Deleting project workspace...
00:33:01.904 [WS-CLEANUP] Deferred wipeout is used...
00:33:01.911 [WS-CLEANUP] done
00:33:01.912 [Pipeline] }
00:33:01.926 [Pipeline] // stage
00:33:01.931 [Pipeline] }
00:33:01.944 [Pipeline] // node
00:33:01.949 [Pipeline] End of Pipeline
00:33:01.984 Finished: SUCCESS